search for: this_cpu_cmpxchg

Displaying results from an estimated 32 matches for "this_cpu_cmpxchg".

2017 Aug 09
1
[PATCH v13 1/5] Introduce xbitmap
...* 2 - 1)
> +
>
> ...
>
> +void xb_preload(gfp_t gfp)
> +{
> +	__radix_tree_preload(gfp, XB_PRELOAD_SIZE);
> +	if (!this_cpu_read(ida_bitmap)) {
> +		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
> +
> +		if (!bitmap)
> +			return;
> +		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
> +		kfree(bitmap);
> +	}
> +}
> +EXPORT_SYMBOL(xb_preload);

Please document the exported API. It's conventional to do this in kerneldoc but for some reason kerneldoc makes people write uninteresting and unuseful documentation. Be sure to cover the *use...
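A kerneldoc block along the requested lines might look like the following sketch (illustrative wording only, describing the behaviour of the quoted code; it is not the comment that was eventually posted):

	/**
	 * xb_preload - preload for xb_set_bit()
	 * @gfp: allocation mask to use for preloading
	 *
	 * Ensures the radix-tree node cache and the per-CPU ida_bitmap are
	 * populated before the caller takes the lock protecting the xbitmap,
	 * so that xb_set_bit() itself does not need to allocate memory.
	 * Must be paired with xb_preload_end() once the bit has been set.
	 */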
2017 Oct 09
4
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
On Sat, Sep 30, 2017 at 12:05:52PM +0800, Wei Wang wrote:
> +static inline void xb_set_page(struct virtio_balloon *vb,
> +			       struct page *page,
> +			       unsigned long *pfn_min,
> +			       unsigned long *pfn_max)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +
> +	*pfn_min = min(pfn, *pfn_min);
> +	*pfn_max = max(pfn, *pfn_max);
> +
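The snippet is cut off by the search index. A sketch of the pattern the helper follows according to the rest of the thread (not the verbatim remainder of the patch; the field name page_xb is assumed here):

	/* record the page's pfn in the balloon's xbitmap */
	xb_preload(GFP_KERNEL);
	xb_set_bit(&vb->page_xb, pfn);
	xb_preload_end();
}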
2017 Oct 11
0
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
...t be expected, thanks. I plan to change it like this:

bool xb_preload(gfp_t gfp)
{
	if (!this_cpu_read(ida_bitmap)) {
		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);

		if (!bitmap)
			return false;
		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
		kfree(bitmap);
	}

	if (__radix_tree_preload(gfp, XB_PRELOAD_SIZE) < 0)
		return false;

	return true;
}

Best,
Wei
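With the bool return, a caller would presumably check the result before touching the xbitmap. A minimal usage sketch (hypothetical caller; vb->page_xb and pfn are borrowed from the virtio-balloon discussion above):

	int err;

	if (!xb_preload(GFP_KERNEL))
		return -ENOMEM;		/* could not preload */
	err = xb_set_bit(&vb->page_xb, pfn);
	xb_preload_end();
	if (err)
		return err;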
2017 Oct 11
0
[PATCH v16 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG
...ool xb_preload(gfp_t gfp)
>> {
>> 	if (!this_cpu_read(ida_bitmap)) {
>> 		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
>>
>> 		if (!bitmap)
>> 			return false;
>> 		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
>> 		kfree(bitmap);
>> 	}
> Excuse me, but you are allocating per-CPU memory while the running CPU might
> change at this line? What happens if the running CPU has changed at this line?
> Will it work even with the new CPU's ida_bitmap ==...
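A minimal sketch of the window being asked about (illustrative comments added to the quoted code; nothing disables preemption between the read and the cmpxchg, so the two may run on different CPUs):

	if (!this_cpu_read(ida_bitmap)) {		/* slot read on CPU A */
		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
		/* kmalloc() may sleep; the task can migrate to CPU B here */

		if (!bitmap)
			return false;
		/* cmpxchg now acts on CPU B's slot; CPU A's slot may stay NULL */
		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
		kfree(bitmap);	/* no-op when the new bitmap was installed */
	}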
2017 Dec 20
2
[PATCH v20 0/7] Virtio-balloon Enhancement
...n path and fold all xbitmap patches into one, and
> post only one xbitmap patch without virtio-balloon changes?
>
> I still think we don't need xb_preload()/xb_preload_end().

Why would you think preload is not needed? The bitmap is allocated via preload
("bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);"), and that allocated
bitmap is then used in xb_set_bit().

> I think xb_find_set() has a bug in the !node path.

I think we can probably remove the "!node" path for now. It would be good to get
the fundamental part in first, and leave optimization to come a...
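For context, the consumer side works roughly as in the IDA code this is modelled on (a sketch under that assumption, not the exact patch): when xb_set_bit() finds no ida_bitmap at the radix-tree slot, it takes the one that xb_preload() parked in the per-CPU variable.

	/* inside xb_set_bit(), after looking up the slot for the bit's index */
	bitmap = rcu_dereference_raw(*slot);
	if (!bitmap) {
		/* claim the bitmap that xb_preload() stashed for this CPU */
		bitmap = this_cpu_xchg(ida_bitmap, NULL);
		if (!bitmap)
			return -EAGAIN;	/* no preload here, e.g. after migration */
		memset(bitmap, 0, sizeof(*bitmap));
		rcu_assign_pointer(*slot, bitmap);
	}
	__set_bit(bit % IDA_BITMAP_BITS, bitmap->bitmap);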
2017 Dec 12
0
[PATCH v19 2/7] xbitmap: potential improvement
...ap)
-		return;
+		return false;
+	/*
+	 * The per-CPU variable is updated with preemption enabled.
+	 * If the calling task is unlucky to be scheduled to another
+	 * CPU which has no ida_bitmap allocation, it will be detected
+	 * when setting a bit (i.e. __xb_set_bit()).
+	 */
 	bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
 	kfree(bitmap);
 	}
+
+	if (__radix_tree_preload(gfp, XB_PRELOAD_SIZE) < 0)
+		return false;
+
+	return true;
 }
 EXPORT_SYMBOL(xb_preload);
diff --git a/lib/xbitmap.c b/lib/xbitmap.c
index 2b547a73..182aa29 100644
--- a/lib/xbitmap.c
+++ b/lib/xbitmap.c
@@ -39,8 +...
2017 Aug 03
0
[PATCH v13 1/5] Introduce xbitmap
...0 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
 }
 EXPORT_SYMBOL(ida_pre_get);
+void xb_preload(gfp_t gfp)
+{
+	__radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+	if (!this_cpu_read(ida_bitmap)) {
+		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
+
+		if (!bitmap)
+			return;
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+}
+EXPORT_SYMBOL(xb_preload);
+
+int xb_set_bit(struct xb *xb, unsigned long bit)
+{
+	int err;
+	unsigned long index = bit / IDA_BITMAP_BITS;
+	struct radix_tree_root *root = &xb->xbrt;
+	struct radix_tree_node *node;
+	void **slot;
+	struct...
2017 Dec 19
0
[PATCH v20 1/7] xbitmap: Introduce xbitmap
...0 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
 }
 EXPORT_SYMBOL(ida_pre_get);
+void xb_preload(gfp_t gfp)
+{
+	__radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+	if (!this_cpu_read(ida_bitmap)) {
+		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
+
+		if (!bitmap)
+			return;
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+}
+EXPORT_SYMBOL(xb_preload);
+
 void __rcu **idr_get_free_cmn(struct radix_tree_root *root,
			      struct radix_tree_iter *iter, gfp_t gfp,
			      unsigned long max)
diff --git a/lib/xbitmap.c b/lib/xbitmap.c
new file mode 100644
index 000000...
2017 Dec 12
0
[PATCH v19 1/7] xbitmap: Introduce xbitmap
...0 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
 }
 EXPORT_SYMBOL(ida_pre_get);
+void xb_preload(gfp_t gfp)
+{
+	__radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+	if (!this_cpu_read(ida_bitmap)) {
+		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
+
+		if (!bitmap)
+			return;
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+}
+EXPORT_SYMBOL(xb_preload);
+
 void __rcu **idr_get_free_cmn(struct radix_tree_root *root,
			      struct radix_tree_iter *iter, gfp_t gfp,
			      unsigned long max)
diff --git a/lib/xbitmap.c b/lib/xbitmap.c
new file mode 100644
index 000000...
2017 Nov 03
0
[PATCH v17 1/6] lib/xbitmap: Introduce xbitmap
...+		if (!bitmap)
+			return false;
+		/*
+		 * The per-CPU variable is updated with preemption enabled.
+		 * If the calling task is unlucky to be scheduled to another
+		 * CPU which has no ida_bitmap allocation, it will be detected
+		 * when setting a bit (i.e. __xb_set_bit()).
+		 */
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+
+	if (__radix_tree_preload(gfp, XB_PRELOAD_SIZE) < 0)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL(xb_preload);
+
 /**
  * radix_tree_iter_delete - delete the entry at this iterator position
  * @root: radix tree root
  * @iter: iterator...
2017 Dec 21
0
[PATCH v20 3/7 RESEND] xbitmap: add more operations
...+		if (!bitmap)
+			return -ENOMEM;
+		/*
+		 * The per-CPU variable is updated with preemption enabled.
+		 * If the calling task is unlucky to be scheduled to another
+		 * CPU which has no ida_bitmap allocation, it will be detected
+		 * when setting a bit (i.e. xb_set_bit()).
+		 */
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+
+	return __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+}
+EXPORT_SYMBOL(xb_preload);
+
 void __rcu **idr_get_free_cmn(struct radix_tree_root *root,
			      struct radix_tree_iter *iter, gfp_t gfp,
			      unsigned long max)
diff --git a/lib/xbit...
2018 Jan 09
0
[PATCH v21 1/5] xbitmap: Introduce xbitmap
...+		if (!bitmap)
+			return -ENOMEM;
+		/*
+		 * The per-CPU variable is updated with preemption enabled.
+		 * If the calling task is unlucky to be scheduled to another
+		 * CPU which has no ida_bitmap allocation, it will be detected
+		 * when setting a bit (i.e. xb_set_bit()).
+		 */
+		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+		kfree(bitmap);
+	}
+
+	return __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+}
+EXPORT_SYMBOL(xb_preload);
+
 void __rcu **idr_get_free_cmn(struct radix_tree_root *root,
			      struct radix_tree_iter *iter, gfp_t gfp,
			      unsigned long max)
diff --git a/lib/xbit...
2017 Aug 03
12
[PATCH v13 0/5] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following new features: 1) fast ballooning: transfer ballooned pages between the guest and host in chunks using sgs, instead of one by one; and 2) free_page_vq: a new virtqueue to report guest free pages to the host. The second feature can be used to accelerate live migration of VMs. Here are some details: Live migration needs to
2017 Dec 19
15
[PATCH v20 0/7] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following new features: 1) fast ballooning: transfer ballooned pages between the guest and host in chunks using sgs, instead of one array each time; and 2) free page block reporting: a new virtqueue to report guest free pages to the host. The second feature can be used to accelerate live migration of VMs. Here are some details: Live
2017 Dec 12
21
[PATCH v19 0/7] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following new features: 1) fast ballooning: transfer ballooned pages between the guest and host in chunks using sgs, instead of one array each time; and 2) free page block reporting: a new virtqueue to report guest free pages to the host. The second feature can be used to accelerate live migration of VMs. Here are some details: Live