search for: preempt_disable

Displaying 20 results from an estimated 288 matches for "preempt_disable".

2020 Nov 03
0
[patch V3 06/37] highmem: Provide generic variant of kmap_atomic*
...ge *p * be used in IRQ contexts, so in some (very limited) cases we need * it. */ + +#ifndef CONFIG_KMAP_LOCAL +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot); +void kunmap_atomic_high(void *kvaddr); + static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) { preempt_disable(); @@ -89,7 +101,38 @@ static inline void *kmap_atomic_prot(str return page_address(page); return kmap_atomic_high_prot(page, prot); } -#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot) + +static inline void __kunmap_atomic(void *vaddr) +{ + kunmap_atomic_high(vaddr); +} +#else /* !...
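The hunk above is the generic (!CONFIG_KMAP_LOCAL) kmap_atomic_prot() path: preemption and page faults go off before the mapping is handed out, and lowmem pages short-circuit to page_address(). A minimal caller-side sketch of the contract this imposes, assuming the standard highmem API; memcpy_to_page_demo() is a hypothetical helper, not part of the patch:

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical helper: copy into a (possibly highmem) page. */
static void memcpy_to_page_demo(struct page *page, const void *src,
				size_t len)
{
	void *dst = kmap_atomic(page);	/* preemption + pagefaults now off */

	memcpy(dst, src, len);		/* must not sleep or fault here */
	kunmap_atomic(dst);		/* unmaps and re-enables both */
}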
2018 Jul 04
1
[PATCH net-next v5 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
..., rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX); > >> + > >> + vhost_disable_notify(&net->dev, vq); > >> + sock = rvq->private_data; > >> + busyloop_timeout = rx ? rvq->busyloop_timeout : tvq->busyloop_timeout; > >> + > >> + preempt_disable(); > >> + endtime = busy_clock() + busyloop_timeout; > >> + while (vhost_can_busy_poll(tvq->dev, endtime) && > >> + !(sock && sk_has_rx_data(sock->sk)) && > >> + vhost_vq_avail_empty(tvq->dev, tvq)) >...
2018 Jul 04
2
[PATCH net-next v5 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...rx ? tvq : rvq; > + > + mutex_lock_nested(&vq->mutex, rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX); > + > + vhost_disable_notify(&net->dev, vq); > + sock = rvq->private_data; > + busyloop_timeout = rx ? rvq->busyloop_timeout : tvq->busyloop_timeout; > + > + preempt_disable(); > + endtime = busy_clock() + busyloop_timeout; > + while (vhost_can_busy_poll(tvq->dev, endtime) && > + !(sock && sk_has_rx_data(sock->sk)) && > + vhost_vq_avail_empty(tvq->dev, tvq)) > + cpu_relax(); > + preempt_enable(); > +...
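Condensed, the busy-poll core being factored out looks like the sketch below; vhost_can_busy_poll(), busy_clock(), sk_has_rx_data() and vhost_vq_avail_empty() are taken from the patch context. The preempt_disable() bracket matters because busy_clock() wraps local_clock(), whose readings are only coherent while the task stays on one CPU:

/* Sketch of the factored-out busy-wait (names from the quoted patch). */
static void busy_poll_core_sketch(struct vhost_virtqueue *tvq,
				  struct socket *sock,
				  unsigned long busyloop_timeout)
{
	unsigned long endtime;

	preempt_disable();		/* keep local_clock() on one CPU */
	endtime = busy_clock() + busyloop_timeout;
	while (vhost_can_busy_poll(tvq->dev, endtime) &&
	       !(sock && sk_has_rx_data(sock->sk)) &&
	       vhost_vq_avail_empty(tvq->dev, tvq))
		cpu_relax();		/* spin politely until work or timeout */
	preempt_enable();
}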
2016 Mar 21
1
[Xen-devel] [PATCH v2 5/6] virt, sched: add cpu pinning to smp_call_sync_on_phys_cpu()
...ync_call_struct { > static void smp_call_sync_callback(struct work_struct *work) > { > struct smp_sync_call_struct *sscs; > + unsigned int cpu = smp_processor_id(); So this obtains the vCPU number, yet ... > sscs = container_of(work, struct smp_sync_call_struct, work); > + preempt_disable(); > + hypervisor_pin_vcpu(cpu); ... here you're supposed to pass a pCPU number. Also don't you need to call smp_processor_id() after preempt_disable()? Jan
2016 Mar 21
0
[Xen-devel] [PATCH v2 5/6] virt, sched: add cpu pinning to smp_call_sync_on_phys_cpu()
...smp_call_sync_callback(struct work_struct *work) >> { >> struct smp_sync_call_struct *sscs; >> + unsigned int cpu = smp_processor_id(); > > So this obtains the vCPU number, yet ... > >> sscs = container_of(work, struct smp_sync_call_struct, work); >> + preempt_disable(); >> + hypervisor_pin_vcpu(cpu); > > ... here you're supposed to pass a pCPU number. > > Also don't you need to call smp_processor_id() after preempt_disable()? No, I'm running on the workqueue bound to the specific (v)cpu and I'm expecting this vcpu to be pinn...
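The disagreement above comes down to when smp_processor_id() is stable: in ordinarily preemptible code the task can migrate between reading the CPU id and using it, so the id should be obtained with preemption already disabled (get_cpu() bundles preempt_disable() with smp_processor_id()); the reply's counter-argument is that a work item queued on a CPU-bound workqueue cannot migrate anyway. A sketch of the conservative ordering, with hypervisor_pin_vcpu() taken from the patch under review:

#include <linux/smp.h>

/* Conservative ordering: read the CPU id with preemption already off. */
static void pin_current_vcpu_sketch(void)
{
	unsigned int cpu = get_cpu();	/* preempt_disable() + smp_processor_id() */

	hypervisor_pin_vcpu(cpu);	/* id cannot go stale while pinned */
	/* ... work that must stay on this physical CPU ... */
	/* unpinning omitted for brevity; see the patch for the real flow */
	put_cpu();			/* preempt_enable() */
}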
2007 Apr 18
1
[patch 14/21] Xen-paravirt: Add XEN config options and disable unsupported config options.
...option reduces the latency of the kernel by making > >>> all kernel code (that is not executing in a critical section) > >>> > >>> > > > > Oh, so that's why it doesn't break when CONFIG_PREEMPT=y. > In which case > > that preempt_disable() I spotted is wrong-and-unneeded. > > > > Why doesn't Xen work with preemption?? > > > > I've forgotten the details. Ian? Keir? Steven? Maybe it > can be done. With CONFIG_PREEMPT, we can have preempted threads reference machine addresses across save/re...
2020 Nov 03
0
[patch V3 22/37] highmem: High implementation details and document API
...geHighMem(page)) + return; + kunmap_high(page); +} + +static inline struct page *kmap_to_page(void *addr) +{ + return __kmap_to_page(addr); +} + +static inline void kmap_flush_unused(void) +{ + __kmap_flush_unused(); +} + +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_local_page_prot(page, prot); +} + +static inline void *kmap_atomic(struct page *page) +{ + return kmap_atomic_prot(page, kmap_prot); +} + +static inline void *kmap_atomic_pfn(unsigned long pfn) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_...
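This V3 hunk reimplements kmap_atomic_prot() on top of the new kmap_local core: the implicit preempt_disable()/pagefault_disable() is retained purely for legacy callers, while the underlying __kmap_local_page_prot() mapping no longer requires preemption to be off. A usage sketch contrasting the two resulting APIs; kmap_local_page()/kunmap_local() are the names the series settles on, and their exact availability is an assumption of this sketch:

#include <linux/highmem.h>

static void contrast_apis_demo(struct page *page)
{
	void *p;

	p = kmap_atomic(page);		/* legacy: preemption + faults off */
	/* atomic-section rules apply: no sleeping in here */
	kunmap_atomic(p);

	p = kmap_local_page(page);	/* new: mapping survives preemption,
					 * being switched on context switch */
	/* sleeping is fine in here */
	kunmap_local(p);
}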
2008 Mar 31
2
[01/17]PATCH Add API for allocating dynamic TR resource. V8
Hi Xiantao, in general I think the code in this patch is fine. I have a couple of nit-picking comments: > + if (target_mask&0x1) { The formatting here isn't quite what most of the kernel does. It would be better if you added spaces so it's a little easier to read, i.e.: if (target_mask & 0x1) { > + p = &__per_cpu_idtrs[cpu][0][0]; > + for (i = IA64_TR_ALLOC_BASE;
2018 Jul 03
2
[PATCH net-next v4 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...rx ? tvq : rvq; > + > + mutex_lock_nested(&vq->mutex, rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX); > + > + vhost_disable_notify(&net->dev, vq); > + sock = rvq->private_data; > + busyloop_timeout = rx ? rvq->busyloop_timeout : tvq->busyloop_timeout; > + > + preempt_disable(); > + endtime = busy_clock() + busyloop_timeout; > + while (vhost_can_busy_poll(tvq->dev, endtime) && > + !(sock && sk_has_rx_data(sock->sk)) && > + vhost_vq_avail_empty(tvq->dev, tvq)) > + cpu_relax(); > + preempt_enable(); > +...
2016 Jan 20
3
[PATCH V2 3/3] vhost_net: basic polling support
..._vq_desc(struct vhost_net *net, > + struct vhost_virtqueue *vq, > + struct iovec iov[], unsigned int iov_size, > + unsigned int *out_num, unsigned int *in_num) > +{ > + unsigned long uninitialized_var(endtime); > + > + if (vq->busyloop_timeout) { > + preempt_disable(); > + endtime = busy_clock() + vq->busyloop_timeout; > + while (vhost_can_busy_poll(vq->dev, endtime) && > + !vhost_vq_more_avail(vq->dev, vq)) > + cpu_relax(); > + preempt_enable(); > + } Isn't there a way to call all this after vhost_get_vq_de...
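Two details of the quoted hunk are worth unpacking: endtime is only written and read inside the busyloop_timeout branch, which is why it was wrapped in uninitialized_var() to hush a false "may be used uninitialized" warning (a macro the kernel has since removed in favor of plain initialization), and the preempt_disable()/preempt_enable() bracket exists because busy_clock() derives from local_clock(). A modernized sketch of the same guard, with the vhost helpers taken from the patch:

static void optional_busy_poll_sketch(struct vhost_virtqueue *vq)
{
	unsigned long endtime = 0;	/* plain init; uninitialized_var() is gone */

	if (vq->busyloop_timeout) {
		preempt_disable();	/* stabilize local_clock() readings */
		endtime = busy_clock() + vq->busyloop_timeout;
		while (vhost_can_busy_poll(vq->dev, endtime) &&
		       !vhost_vq_more_avail(vq->dev, vq))
			cpu_relax();
		preempt_enable();
	}
	/* vhost_get_vq_desc() would run here, as in the original helper */
}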
2018 Jul 02
5
[PATCH net-next v4 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For the full performance report, see patch 4. v3 -> v4: fix some issues v2 -> v3: these patches were split from a previous big patch:
2018 Jun 30
9
[PATCH net-next v3 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. These patches were split from a previous big patch: http://patchwork.ozlabs.org/patch/934673/ For the full performance report, see patch 4. Tonghao Zhang (4): net: vhost:
2016 Jan 21
1
[PATCH V2 3/3] vhost_net: basic polling support
...st_virtqueue *vq, > >>+ struct iovec iov[], unsigned int iov_size, > >>+ unsigned int *out_num, unsigned int *in_num) > >>+{ > >>+ unsigned long uninitialized_var(endtime); > >>+ > >>+ if (vq->busyloop_timeout) { > >>+ preempt_disable(); > >>+ endtime = busy_clock() + vq->busyloop_timeout; > >>+ while (vhost_can_busy_poll(vq->dev, endtime) && > >>+ !vhost_vq_more_avail(vq->dev, vq)) > >>+ cpu_relax(); > >>+ preempt_enable(); > >>+ } > > ...