search for: preemption

Displaying results from an estimated 449 matches for "preemption".

2017 Nov 07
4
Call preemption
Hello, Has anyone already implemented some sort of call preemption in Asterisk? I am trying to achieve something like this: - I want to limit the number of calls on a given SIP peer to 10 - on the other hand, some calls have higher priority than others - when the ceiling of 10 calls is reached and a call with a high priority is attempted, I would like to dr...
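
The decision logic being asked about can be sketched in plain C (the names admit_call, call_t and drop_call, and the fixed ceiling of 10, are hypothetical; in practice this would live in the dialplan, an AGI script, or AMI rather than in C):

    #include <stddef.h>

    #define MAX_CALLS 10

    typedef struct { int id; int priority; } call_t;

    /* Decide whether an incoming call may proceed.  Returns 1 if admitted
     * (possibly after dropping a lower-priority active call), 0 if rejected. */
    static int admit_call(call_t *active, size_t n_active, call_t incoming,
                          void (*drop_call)(const call_t *))
    {
        if (n_active < MAX_CALLS)
            return 1;                          /* below the ceiling: always admit */

        call_t *victim = &active[0];
        for (size_t i = 1; i < n_active; i++)
            if (active[i].priority < victim->priority)
                victim = &active[i];           /* lowest-priority call in progress */

        if (victim->priority < incoming.priority) {
            drop_call(victim);                 /* preempt it to make room */
            return 1;
        }
        return 0;                              /* incoming call is not important enough */
    }
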
2011 Jan 18
2
Surprise Thread Preemptions
...hreads will be preempted by which on my OpenSolaris machine. Therefore, I ran a multithreaded program "myprogram" with 32 threads on my 24-core Solaris machine. I made sure that each thread of my program has the same priority (priority zero), so that priority inversions (and the preemptions they cause -- system overhead) are reduced. I then ran the following script, whoprempt.d, to see who preempted myprogram threads and got the following output. Unlike what I expected, myprogram threads are preempted (2796 times -- last line of the output) by threads of the same myprogram. Could anyone explain why t...
2011 Dec 12
1
[LLVMdev] Preemption with LLVM
Hey all, I'm investigating LLVM for use for a future project of mine, and I was wondering whether something is possible. Specifically, I'm wondering if there's a way to force preemption of a green thread-style task - something like Erlang's "processes", where if a task executes for too long, it is preempted. [1] My main goal here is to avoid having to write my own virtual machine - the JVM is, as far as I can tell, not appropriate for when you want to create hundred...
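
One common way to get this without writing a full VM is cooperative preemption: a compiler pass (for example in LLVM) inserts a cheap counter check at function entries and loop back-edges, much like Erlang's reduction counting. A minimal sketch, with hypothetical names (task_t, scheduler_yield):

    #define REDUCTIONS_PER_SLICE 2000

    typedef struct task {
        int reductions_left;              /* budget for the current time slice */
        /* saved stack/continuation state would live here */
    } task_t;

    extern void scheduler_yield(task_t *t);   /* hand control back to the scheduler */

    /* Inserted by the compiler at function entries and loop back-edges. */
    static inline void yield_check(task_t *t)
    {
        if (--t->reductions_left <= 0) {
            t->reductions_left = REDUCTIONS_PER_SLICE;
            scheduler_yield(t);               /* the task is "preempted" here */
        }
    }
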
2007 Mar 16
4
Re: Fwd: Re: struct page field arrangement
Btw., another question that made me wonder already when doing the original patch: why is it that x86-64 properly uses locking for mm_pin_all(), yet i386 doesn't need to? Jan
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...> schemes. It is a compromise to provide some lock unfairness without > sacrificing the good cacheline behavior of the queue spinlock. But but but,.. any kind of queueing gets you into a world of hurt with virt. The simple test-and-set lock (as per the above) still sucks due to lock holder preemption, but at least the suckage doesn't queue. Because with queueing you not only have to worry about the lock holder getting preemption, but also the waiter(s). Take the situation of 3 (v)CPUs where cpu0 holds the lock but is preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after whic...
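
For reference, the "simple test-and-set lock" contrasted with the queued lock above looks roughly like the C11 sketch below (not the kernel's actual implementation). Because waiters are unordered, a preempted waiter only delays itself, whereas a preempted waiter in a queue also stalls everyone queued behind it:

    #include <stdatomic.h>

    typedef struct { atomic_int locked; } tas_lock_t;

    static void tas_lock(tas_lock_t *l)
    {
        /* Spin until we flip the flag from 0 to 1; no ordering among waiters. */
        while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
            while (atomic_load_explicit(&l->locked, memory_order_relaxed))
                ;   /* wait for release before retrying the exchange */
    }

    static void tas_unlock(tas_lock_t *l)
    {
        atomic_store_explicit(&l->locked, 0, memory_order_release);
    }
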
2016 Jul 05
2
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
Hi Xinhui, 2016-06-28 22:43 GMT+08:00 Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>: > This is to fix some lock holder preemption issues. Some other lock > implementations do a spin loop before acquiring the lock itself. Currently the > kernel has an interface, bool vcpu_is_preempted(int cpu). It takes the cpu > as a parameter and returns true if the cpu is preempted. Then the kernel can break > the spin loops upon the r...
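
A rough sketch of the pattern the description refers to (simplified, not the actual kernel code): spin on the lock owner only while the owner's vCPU is actually running, and give up as soon as vcpu_is_preempted() reports that it was scheduled out.

    #include <stdbool.h>

    /* Stand-ins for kernel primitives, to keep the sketch self-contained. */
    extern bool vcpu_is_preempted(int cpu);   /* interface named in the patch */
    static inline void cpu_relax(void) { }    /* placeholder for the real pause/barrier */

    struct lock {
        int owner_cpu;                        /* cpu holding the lock, -1 if free */
    };

    /* Returns true if the lock became free while spinning, false if spinning
     * should stop because the owner's vCPU has been preempted. */
    static bool spin_on_owner(struct lock *l)
    {
        int owner = l->owner_cpu;

        while (owner >= 0) {
            if (vcpu_is_preempted(owner))
                return false;                 /* owner is not running: stop wasting cycles */
            cpu_relax();
            owner = l->owner_cpu;
        }
        return true;
    }
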
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
...lier definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments, thanks to boqun's and Peter's suggestions. This patch set aims to fix lock holder preemption issues. test-case: perf record -a perf bench sched messaging -g 400 -p && perf report 18.09% sched-messaging [kernel.vmlinux] [k] osq_lock 12.28% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner 5.27% sched-messaging [kernel.vmlinux] [k] mutex_unlock 3.89% sched-messag...
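
The "default vcpu_is_preempted" mentioned in the changelog would plausibly take the shape below on architectures without the facility (a sketch, not the actual patch): it simply reports "never preempted", so existing spin loops behave exactly as before.

    /* Fallback for architectures that do not provide the facility. */
    #ifndef vcpu_is_preempted
    #define vcpu_is_preempted(cpu)  (false)   /* bool as in <stdbool.h> / kernel types */
    #endif
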
2020 Nov 03
0
[patch V3 25/37] mm/highmem: Provide kmap_local*
...e incoming task are restored. That's obviously slow, but highmem is slow anyway. The kmap_local.*() functions can be invoked from both preemptible and atomic context. kmap local sections disable migration to keep the resulting virtual mapping address correct, but disable neither pagefaults nor preemption. A wholesale conversion of kmap_atomic to be fully preemptible is not possible because some of the usage sites might rely on the preemption disable for serialization or on the implicit pagefault disable. Needs to be done on a case by case basis. Signed-off-by: Thomas Gleixner <tglx at linutron...
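
A usage sketch of the API this patch introduces, as described above (length checks and error handling omitted): the mapping remains valid if the task is preempted or faults, which is exactly what kmap_atomic() does not allow.

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_from_highmem_page(struct page *page, void *dst, size_t len)
    {
        void *src = kmap_local_page(page);    /* task-local mapping of the page */

        memcpy(dst, src, len);                /* may be preempted or fault here */

        kunmap_local(src);                    /* tear the mapping down again */
    }
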
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...ck unfairness without > >>sacrificing the good cacheline behavior of the queue spinlock. > >But but but,.. any kind of queueing gets you into a world of hurt with > >virt. > > > >The simple test-and-set lock (as per the above) still sucks due to lock > >holder preemption, but at least the suckage doesn't queue. Because with > >queueing you not only have to worry about the lock holder getting > >preemption, but also the waiter(s). > > > >Take the situation of 3 (v)CPUs where cpu0 holds the lock but is > >preempted. cpu1 queues, cpu2...
2006 Mar 28
7
context switch
In debugging the sles9 port on 64 bit MP machines, I am seeing a problem where the hypervisor takes a fault in loading fs in the context switch code (load_segments()). The selector is one of the TLS selectors. It appears that the cpu in question has updated this selector with a value of 0 just prior to the problem I am seeing. Looking at the Linux context switch code, we first update the TLS
2017 Oct 09
4
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
On Sat, Sep 30, 2017 at 12:05:52PM +0800, Wei Wang wrote: > +static inline void xb_set_page(struct virtio_balloon *vb, > + struct page *page, > + unsigned long *pfn_min, > + unsigned long *pfn_max) > +{ > + unsigned long pfn = page_to_pfn(page); > + > + *pfn_min = min(pfn, *pfn_min); > + *pfn_max = max(pfn, *pfn_max); > +
2016 Jul 21
5
[PATCH v3 0/4] implement vcpu preempted check
...lier definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments, thanks to boqun's and Peter's suggestions. This patch set aims to fix lock holder preemption issues. test-case: perf record -a perf bench sched messaging -g 400 -p && perf report before patch: 18.09% sched-messaging [kernel.vmlinux] [k] osq_lock 12.28% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner 5.27% sched-messaging [kernel.vmlinux] [k] mutex_unlock 3.89%...
2017 Oct 11
0
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
...Fails to take oom_lock and loop forever. __alloc_pages_may_oom() uses mutex_trylock(&oom_lock). I think the second __alloc_pages_may_oom() will not continue since the first one is in progress. > > By the way, is xb_set_page() safe? > Sleeping in the kernel with preemption disabled is a bug, isn't it? > __radix_tree_preload() returns 0 with preemption disabled upon success. > xb_preload() disables preemption if __radix_tree_preload() fails. > Then, kmalloc() is called with preemption disabled, isn't it? > But xb_set_page() calls xb_preload(GFP_KER...
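
A generic illustration of the bug pattern being questioned here (not the virtio-balloon code itself): a GFP_KERNEL allocation may sleep, so issuing one while preemption is disabled is invalid; either allocate before disabling preemption or use a non-sleeping GFP flag.

    #include <linux/preempt.h>
    #include <linux/slab.h>

    static void *broken_alloc(size_t size)
    {
        void *p;

        preempt_disable();
        p = kmalloc(size, GFP_KERNEL);    /* BUG: may sleep with preemption disabled */
        preempt_enable();
        return p;
    }

    static void *safer_alloc(size_t size)
    {
        void *p;

        preempt_disable();
        p = kmalloc(size, GFP_ATOMIC);    /* non-sleeping allocation is allowed here */
        preempt_enable();
        return p;
    }
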
2017 Oct 11
0
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
...k one solution would be to let the OOM use the old leak_balloon() code path, and we can add one more parameter to leak_balloon to control that: leak_balloon(struct virtio_balloon *vb, size_t num, bool oom) >>> By the way, is xb_set_page() safe? >>> Sleeping in the kernel with preemption disabled is a bug, isn't it? >>> __radix_tree_preload() returns 0 with preemption disabled upon success. >>> xb_preload() disables preemption if __radix_tree_preload() fails. >>> Then, kmalloc() is called with preemption disabled, isn't it? >>> But xb_set...