search for: local_irq_save

Displaying 20 results from an estimated 253 matches for "local_irq_save".

2012 Dec 06
1
Question on local_irq_save/local_irq_restore
Hi, I have some confusion about local_irq_save() and local_irq_restore(). From the definitions, you can see that local_irq_save() calls local_irq_disable(). But why is there no local_irq_enable() in local_irq_restore()? #define local_irq_save(x) ({ local_save_flags(x); local_irq_disable(); }) #define local_irq_restore(x) ({ BUILD_BU...
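The short answer, hidden in the truncated definition above, is that local_irq_restore() does not unconditionally re-enable interrupts; it restores whatever flag state local_irq_save() captured. A minimal usage sketch of the semantics (the real definitions go through the arch_/raw_ helpers):

        unsigned long flags;

        local_irq_save(flags);          /* remember current IRQ state, then disable IRQs */
        /* critical section: safe even if IRQs were already disabled on entry */
        local_irq_restore(flags);       /* put back the saved state: IRQs come back on
                                         * only if they were on when we saved them */

An explicit local_irq_enable() in local_irq_restore() would therefore be wrong: it would turn interrupts back on even when the caller was already running with them disabled.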
2020 Aug 11
3
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
...he more > problems I keep finding... bah bah bah. That is, most of these irq-tracking problem are new because commit: 859d069ee1dd ("lockdep: Prepare for NMI IRQ state tracking") changed irq-tracking to ignore the lockdep recursion count. This then allows: lock_acquire() raw_local_irq_save(); current->lockdep_recursion++; trace_lock_acquire() ... tracing ... #PF under raw_local_irq_*() __lock_acquire() arch_spin_lock(&graph_lock) pv-spinlock-wait() local_irq_save() under raw_local_irq_*() However afaict that just made a bad situation w...
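Unflattening the call chain quoted above (the indentation is a reconstruction of the nesting, based only on the visible text):

        lock_acquire()
          raw_local_irq_save();
          current->lockdep_recursion++;
          trace_lock_acquire()
            ... tracing ...
            #PF under raw_local_irq_*()
              __lock_acquire()
                arch_spin_lock(&graph_lock)
                  pv-spinlock-wait()
                    local_irq_save()      /* under raw_local_irq_*() */

The point being made is that once lockdep's recursion count no longer suppresses tracking, a page fault taken while tracing can reach the paravirt spinlock slow path, which then nests local_irq_save() under raw_local_irq_*().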
2007 Nov 26
0
[PATCH] [Mini-OS] Make gnttab allocation/free safe
...h> #define NR_RESERVED_ENTRIES 8 @@ -31,20 +32,29 @@ static grant_entry_t *gnttab_table; static grant_ref_t gnttab_list[NR_GRANT_ENTRIES]; +static __DECLARE_SEMAPHORE_GENERIC(gnttab_sem, NR_GRANT_ENTRIES); static void put_free_entry(grant_ref_t ref) { + unsigned long flags; + local_irq_save(flags); gnttab_list[ref] = gnttab_list[0]; gnttab_list[0] = ref; - + local_irq_restore(flags); + up(&gnttab_sem); } static grant_ref_t get_free_entry(void) { - unsigned int ref = gnttab_list[0]; + unsigned int ref; + unsigned long flags; + down(&gnttab_sem...
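Applied, the hunk above makes the Mini-OS free-list helpers look roughly like this (a reconstruction; the tail of get_free_entry() is cut off in the preview, so its body below is an assumption about the obvious pairing):

        static __DECLARE_SEMAPHORE_GENERIC(gnttab_sem, NR_GRANT_ENTRIES);

        static void put_free_entry(grant_ref_t ref)
        {
            unsigned long flags;

            local_irq_save(flags);        /* the free list is touched from IRQ context */
            gnttab_list[ref] = gnttab_list[0];
            gnttab_list[0] = ref;
            local_irq_restore(flags);
            up(&gnttab_sem);              /* one more free entry available */
        }

        static grant_ref_t get_free_entry(void)
        {
            unsigned int ref;
            unsigned long flags;

            down(&gnttab_sem);            /* sleep until an entry is free */
            local_irq_save(flags);
            ref = gnttab_list[0];
            gnttab_list[0] = gnttab_list[ref];   /* pop the head of the free list */
            local_irq_restore(flags);
            return ref;
        }

The semaphore counts availability (and lets get_free_entry() sleep), while the irq-save/restore pair keeps the list manipulation itself atomic with respect to interrupt handlers.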
2020 Aug 05
9
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On Wed, Aug 05, 2020 at 03:59:40PM +0200, Marco Elver wrote: > On Wed, Aug 05, 2020 at 03:42PM +0200, peterz at infradead.org wrote: > > Shouldn't we __always_inline those? They're going to be really small. > > I can send a v2, and you can choose. For reference, though: > > ffffffff86271ee0 <arch_local_save_flags>: > ffffffff86271ee0: 0f 1f 44 00 00
2020 Aug 06
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On Thu, Aug 06, 2020 at 09:47:23AM +0200, Marco Elver wrote: > Testing my hypothesis that raw then nested non-raw > local_irq_save/restore() breaks IRQ state tracking -- see the reproducer > below. This is at least 1 case I can think of that we're bound to hit. Aaargh! > diff --git a/init/main.c b/init/main.c > index 15bd0efff3df..0873319dcff4 100644 > --- a/init/main.c > +++ b/init/main.c > @@ -1041,6...
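The reproducer itself is truncated in the preview; the pattern it exercises is, schematically (a sketch of the hypothesis, not the actual init/main.c hunk):

        unsigned long flags, flags2;

        raw_local_irq_save(flags);      /* IRQs really off; lockdep is not told */
        local_irq_save(flags2);         /* lockdep records "IRQs off" */
        local_irq_restore(flags2);      /* flags2 says "off", so no trace_hardirqs_on() */
        raw_local_irq_restore(flags);   /* IRQs are back on, but lockdep still
                                         * believes they are off */

After the outer raw restore, the tracked state and the real state disagree, which is exactly the kind of breakage the hypothesis describes.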
2020 Aug 12
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
...200, peterz at infradead.org wrote: > > > > > So let me once again see if I can't find a better solution for this all. > > > Clearly it needs one :/ > > > > So the below boots without triggering the debug code from Marco -- it > > should allow nesting local_irq_save/restore under raw_local_irq_*(). > > > > I tried unconditional counting, but there's some _reallly_ wonky / > > asymmetric code that wrecks that and I've not been able to come up with > > anything useful. > > > > This one starts counting when local_irq_...
2013 Aug 20
5
[PATCH-v3 1/4] idr: Percpu ida
...n't passed > + * __GFP_WAIT, of course). > + * > + * Will not fail if passed __GFP_WAIT. > + */ > +int percpu_ida_alloc(struct percpu_ida *pool, gfp_t gfp) > +{ > + DEFINE_WAIT(wait); > + struct percpu_ida_cpu *tags; > + unsigned long flags; > + int tag; > + > + local_irq_save(flags); > + tags = this_cpu_ptr(pool->tag_cpu); > + > + /* Fastpath */ > + tag = alloc_local_tag(pool, tags); > + if (likely(tag >= 0)) { > + local_irq_restore(flags); > + return tag; > + } > + > + while (1) { > + spin_lock(&pool->lock); > + >...
2020 Aug 11
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
...RAVIRT_SPINLOCKS, however, the warnings go away. >>>> >>>> Thanks for testing! >>>> >>>> I take it you are doing the tests in a KVM guest? >>> >>> Yes, correct. >>> >>>> If so I have a gut feeling that the use of local_irq_save() and >>>> local_irq_restore() in kvm_wait() might be fishy. I might be completely >>>> wrong here, though. >>> >>> Happy to help debug more, although I might need patches or pointers >>> what to play with. >>> >>>> BTW, I thin...
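For reference, the kvm_wait() code being called fishy follows roughly this shape (a from-memory sketch, not a verbatim copy of arch/x86/kernel/kvm.c):

        static void kvm_wait(u8 *ptr, u8 val)
        {
                unsigned long flags;

                local_irq_save(flags);

                /* Re-check the lock byte with IRQs off before halting. */
                if (READ_ONCE(*ptr) != val)
                        goto out;

                /*
                 * Halt until the kick interrupt arrives.  safe_halt()
                 * re-enables IRQs as part of the halt when the caller
                 * had them enabled.
                 */
                if (arch_irqs_disabled_flags(flags))
                        halt();
                else
                        safe_halt();

        out:
                local_irq_restore(flags);
        }

The suspicion in the thread is that this local_irq_save()/restore() pair, reached from the paravirt spinlock slow path, interacts badly with the raw/non-raw IRQ state tracking discussed elsewhere in the series.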
2013 Aug 21
1
[PATCH-v3 1/4] idr: Percpu ida
...n't passed > + * __GFP_WAIT, of course). > + * > + * Will not fail if passed __GFP_WAIT. > + */ > +int percpu_ida_alloc(struct percpu_ida *pool, gfp_t gfp) > +{ > + DEFINE_WAIT(wait); > + struct percpu_ida_cpu *tags; > + unsigned long flags; > + int tag; > + > + local_irq_save(flags); > + tags = this_cpu_ptr(pool->tag_cpu); You could drop this_cpu_ptr if you pass pool->tag_cpu to alloc_local_tag. > +/** > + * percpu_ida_free - free a tag > + * @pool: pool @tag was allocated from > + * @tag: a tag previously allocated with percpu_ida_alloc() > +...
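A sketch of what that suggestion would look like at the call site (hypothetical; the exact alloc_local_tag() signature is in the patch, which the preview cuts off):

        /* Caller no longer resolves the per-cpu pointer itself ... */
        local_irq_save(flags);
        tag = alloc_local_tag(pool->tag_cpu);    /* ... alloc_local_tag() does
                                                  * this_cpu_ptr() internally */
        if (likely(tag >= 0)) {
                local_irq_restore(flags);
                return tag;
        }

The slow path that follows still needs the resolved tags pointer, so whether this is a net simplification depends on the rest of the function.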
2013 Aug 28
2
[PATCH-v3 1/4] idr: Percpu ida
...> > > + > > > + spin_unlock(&pool->lock); > > > + local_irq_restore(flags); > > > + > > > + if (tag >= 0 || !(gfp & __GFP_WAIT)) > > > + break; > > > + > > > + schedule(); > > > + > > > + local_irq_save(flags); > > > + tags = this_cpu_ptr(pool->tag_cpu); > > > + } > > > > What guarantees that this wait will terminate? > > It seems fairly clear to me from the break statement a couple lines up; > if we were passed __GFP_WAIT we terminate iff we successfull...
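Stripped of the reply markers, the loop being asked about reads roughly like this (reconstructed from the visible fragments; the refill work done under pool->lock is elided in the preview and therefore here too):

        while (1) {
                spin_lock(&pool->lock);
                /* ... try to refill the local tags from the global pool ... */
                spin_unlock(&pool->lock);
                local_irq_restore(flags);

                if (tag >= 0 || !(gfp & __GFP_WAIT))
                        break;          /* got a tag, or the caller may not sleep */

                schedule();

                local_irq_save(flags);
                tags = this_cpu_ptr(pool->tag_cpu);
        }

So a caller without __GFP_WAIT exits after one pass regardless of success, and a __GFP_WAIT caller loops until a tag is obtained, which is the behaviour the reply defends.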
2020 Aug 07
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
...>>> On Thu, Aug 06, 2020 at 01:32PM +0200, peterz at infradead.org wrote: >>>>>>>> On Thu, Aug 06, 2020 at 09:47:23AM +0200, Marco Elver wrote: >>>>>>>>> Testing my hypothesis that raw then nested non-raw >>>>>>>>> local_irq_save/restore() breaks IRQ state tracking -- see the reproducer >>>>>>>>> below. This is at least 1 case I can think of that we're bound to hit. >>>>>>> ... >>>>>>>> >>>>>>>> /me goes ponder things... >...
2018 Nov 06
0
[PATCH v15 23/26] sched: early boot clock
...; + > + /* > + * Set __gtod_offset such that once we mark sched_clock_running, > + * sched_clock_tick() continues where sched_clock() left off. > + * > + * Even if TSC is buggered, we're still UP at this point so it > + * can't really be out of sync. > + */ > + local_irq_save(flags); > + __sched_clock_gtod_offset(); > + local_irq_restore(flags); > + > sched_clock_running = 1; > + > + /* Now that sched_clock_running is set adjust scd */ > + local_irq_save(flags); > + sched_clock_tick(); > + local_irq_restore(flags); > } > /* > *...
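Stripped of the diff markers, the boot-time sequence in that hunk is:

        /*
         * Set __gtod_offset such that once we mark sched_clock_running,
         * sched_clock_tick() continues where sched_clock() left off.
         *
         * Even if TSC is buggered, we're still UP at this point so it
         * can't really be out of sync.
         */
        local_irq_save(flags);
        __sched_clock_gtod_offset();
        local_irq_restore(flags);

        sched_clock_running = 1;

        /* Now that sched_clock_running is set, adjust scd */
        local_irq_save(flags);
        sched_clock_tick();
        local_irq_restore(flags);

The irq-save/restore pairs only keep each adjustment from being torn by an interrupt; the ordering around sched_clock_running = 1 is what lets sched_clock_tick() continue where sched_clock() left off.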
2013 Aug 16
6
[PATCH-v3 0/4] target/vhost-scsi: Add per-cpu ida tag pre-allocation for v3.12
From: Nicholas Bellinger <nab at linux-iscsi.org> Hi folks, This is an updated series for adding tag pre-allocation support of target fabric descriptor memory, utilizing Kent's latest per-cpu ida bits here, along with Christoph Lameter's latest comments: [PATCH 04/10] idr: Percpu ida http://marc.info/?l=linux-kernel&m=137160026006974&w=2 The first patch is a
2020 Sep 15
0
[PATCH RFC v1 09/18] x86/hyperv: provide a bunch of helper functions
..._pages -= counts[i]; > + i++; So here we believe we will never overrun the 2048 bytes we 'allocated' for 'counts' above. While 'if (num_pages > HV_DEPOSIT_MAX)' presumably guarantees that, this is not really obvious. > + num_allocations++; > + } > + > + local_irq_save(flags); > + > + input_page = *this_cpu_ptr(hyperv_pcpu_input_arg); > + > + input_page->partition_id = partition_id; > + > + /* Populate gpa_page_list - these will fit on the input page */ > + for (i = 0, page_count = 0; i < num_allocations; ++i) { > + base_pfn = page_...
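One way to address that review comment, sketched with hypothetical structure (the surrounding function and the exact layout of counts are in the patch, which the preview truncates), is to make the bound on the index explicit instead of leaving it implied by the earlier num_pages check:

        /*
         * 'counts' occupies 2048 bytes, i.e. at most 2048 / sizeof(*counts)
         * entries.  State that here rather than relying on the distant
         * 'if (num_pages > HV_DEPOSIT_MAX)' check to keep i in range.
         */
        while (num_pages) {
                if (WARN_ON_ONCE(i >= 2048 / sizeof(*counts)))
                        break;
                /* ... allocate the next chunk and record it in counts[i] ... */
                num_pages -= counts[i];
                i++;
                num_allocations++;
        }

Alternatively, a comment next to the loop explaining why HV_DEPOSIT_MAX already bounds i would answer the same "this is not really obvious" concern.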