Displaying 19 results from an estimated 19 matches for "xb_preload_and_set_bit".
2017 Dec 12
0
[PATCH v19 3/7] xbitmap: add more operations
...ions(+), 1 deletion(-)
diff --git a/include/linux/xbitmap.h b/include/linux/xbitmap.h
index b4d8375..eddf0d5e 100644
--- a/include/linux/xbitmap.h
+++ b/include/linux/xbitmap.h
@@ -33,8 +33,14 @@ static inline void xb_init(struct xb *xb)
}
int xb_set_bit(struct xb *xb, unsigned long bit);
+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
bool xb_test_bit(const struct xb *xb, unsigned long bit);
-int xb_clear_bit(struct xb *xb, unsigned long bit);
+void xb_clear_bit(struct xb *xb, unsigned long bit);
+unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
+ unsigne...
2017 Nov 03
0
[PATCH v17 1/6] lib/xbitmap: Introduce xbitmap
...<mst at redhat.com>
Cc: Tetsuo Handa <penguin-kernel at I-love.SAKURA.ne.jp>
v16->v17 ChangeLog:
1) xb_preload: allocate the ida bitmap before __radix_tree_preload() to avoid
calling kmalloc with preemption disabled. Also change this function so that it
does not return with preemption disabled on error.
2) xb_preload_and_set_bit: a convenience wrapper around xb_preload and
xb_set_bit (see the sketch after this entry).
v15->v16 ChangeLog:
1) coding style - separate small functions for bit set/clear/test;
2) Clear a range of bits in a more efficient way:
A) clear a range of bits from the same ida bitmap directly rather than
search...
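As a minimal sketch of what the convenience wrapper in item 2 packages, mirroring the xb_preload_and_set_bit() implementation quoted in the v17 2/6 excerpt further down in these results (the helper name example_set_bit is illustrative only, not from the patch):

static int example_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp)
{
	int err;

	/* May sleep depending on gfp; on success preemption is disabled. */
	if (!xb_preload(gfp))
		return -ENOMEM;
	/* Uses the preloaded per-cpu ida_bitmap if a new one is needed. */
	err = xb_set_bit(xb, bit);
	/* Pairs with a successful xb_preload(); re-enables preemption. */
	xb_preload_end();

	return err;
}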
2017 Dec 15
4
[PATCH v19 3/7] xbitmap: add more operations
On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:
> +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
I'm struggling to understand when one would use this. The xb_ API
requires you to handle your own locking. But specifying GFP flags
here implies you can sleep. So ... um ... there's no locking?
> +void xb_clear_bit_range(struct xb *xb, u...
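A hypothetical caller pattern that illustrates the locking point raised above, by analogy with the long-standing radix_tree_preload() idiom (the lock, the struct xb instance, and the helper name are illustrative only, not from the patch): the allocation that may sleep happens in xb_preload() before the caller's lock is taken, and xb_set_bit() then runs under that lock without sleeping.

#include <linux/spinlock.h>
#include <linux/xbitmap.h>

static DEFINE_SPINLOCK(example_lock);
static struct xb example_xb;

static int example_set_bit_locked(unsigned long bit)
{
	int err;

	if (!xb_preload(GFP_KERNEL))	/* may sleep; no lock held yet */
		return -ENOMEM;
	spin_lock(&example_lock);	/* caller-provided locking */
	err = xb_set_bit(&example_xb, bit);
	spin_unlock(&example_lock);
	xb_preload_end();

	return err;
}

xb_preload_and_set_bit() collapses the preload and the set into a single call, which is where the question comes from: a gfp that allows sleeping only makes sense if no lock needs to be held around that call.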
2017 Dec 12
21
[PATCH v19 0/7] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following
new features:
1) fast ballooning: transfer ballooned pages between the guest and host in
chunks using sgs, instead of one array each time; and
2) free page block reporting: a new virtqueue to report guest free pages
to the host.
The second feature can be used to accelerate live migration of VMs. Here
are some details:
Live
2017 Nov 29
22
[PATCH v18 00/10] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following
new features:
1) fast ballooning: transfer ballooned pages between the guest and host in
chunks using sgs, instead of one array each time; and
2) free page block reporting: a new virtqueue to report guest free pages
to the host.
The second feature can be used to accelerate live migration of VMs. Here
are some details:
Live
2017 Dec 01
1
[PATCH v18 07/10] virtio-balloon: VIRTIO_BALLOON_F_SG
...> + struct page *page,
> + unsigned long *pfn_min,
> + unsigned long *pfn_max)
> +{
> + unsigned long pfn = page_to_pfn(page);
> + int ret;
> +
> + *pfn_min = min(pfn, *pfn_min);
> + *pfn_max = max(pfn, *pfn_max);
> +
> + do {
> + ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
> + GFP_NOWAIT | __GFP_NOWARN);
> + } while (unlikely(ret == -EAGAIN));
what exactly does this loop do? Does this wait
forever until there is some free memory? why GFP_NOWAIT?
> +
> + return ret;
> +}
> +
> static unsigned fill_balloon(stru...
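One reading of the loop being asked about, based on the xb_set_bit() excerpt quoted further down in these results rather than on an authoritative answer from the thread ('vb' and 'pfn' are the variables from the quoted xb_set_page()):

	int ret;

	do {
		/*
		 * GFP_NOWAIT: the preload never sleeps.  An outright
		 * allocation failure makes the wrapper return -ENOMEM,
		 * which is not retried and ends the loop.
		 */
		ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
					     GFP_NOWAIT | __GFP_NOWARN);
		/*
		 * -EAGAIN comes from xb_set_bit() when the per-cpu
		 * ida_bitmap prepared by the preload is not available at
		 * set time (the this_cpu_xchg() path in the excerpt
		 * further down), so the caller preloads again and retries.
		 */
	} while (unlikely(ret == -EAGAIN));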
2017 Nov 03
0
[PATCH v17 2/6] radix tree test suite: add tests for xbitmap
...lot, bitmap, NULL,
+ NULL);
+ return 0;
+ }
+ bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (!bitmap)
+ return -EAGAIN;
+ memset(bitmap, 0, sizeof(*bitmap));
+ __radix_tree_replace(root, node, slot, bitmap, NULL, NULL);
+ }
+
+ __set_bit(bit, bitmap->bitmap);
+ return 0;
+}
+
+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp)
+{
+ int ret = 0;
+
+ if (!xb_preload(gfp))
+ return -ENOMEM;
+
+ ret = xb_set_bit(xb, bit);
+ xb_preload_end();
+
+ return ret;
+}
+
+bool xb_test_bit(struct xb *xb, unsigned long bit)
+{
+ unsigned long index = bit / IDA_BITMAP_BITS;
+ const struct ra...
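A minimal, hypothetical usage sketch to go with the excerpt above (not part of the patch; assert() stands in for whatever checking the real test suite uses, and the bit index is arbitrary):

#include <assert.h>

static struct xb test_xb;

static void xbitmap_smoke_test(void)
{
	xb_init(&test_xb);
	assert(!xb_test_bit(&test_xb, 700));

	/* Preload and set in one call; returns 0 on success. */
	assert(xb_preload_and_set_bit(&test_xb, 700, GFP_KERNEL) == 0);
	assert(xb_test_bit(&test_xb, 700));

	xb_clear_bit(&test_xb, 700);
	assert(!xb_test_bit(&test_xb, 700));
}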
2017 Dec 17
0
[PATCH v19 3/7] xbitmap: add more operations
On 12/16/2017 07:28 PM, Tetsuo Handa wrote:
> Wei Wang wrote:
>> On 12/16/2017 02:42 AM, Matthew Wilcox wrote:
>>> On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:
>>>> +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
>>> I'm struggling to understand when one would use this. The xb_ API
>>> requires you to handle your own locking. But specifying GFP flags
>>> here implies you can sleep. So ... um ... there's no locking?
>>...
2017 Nov 30
0
[PATCH v18 07/10] virtio-balloon: VIRTIO_BALLOON_F_SG
...unsigned long *pfn_min,
> > + unsigned long *pfn_max)
> > +{
> > + unsigned long pfn = page_to_pfn(page);
> > + int ret;
> > +
> > + *pfn_min = min(pfn, *pfn_min);
> > + *pfn_max = max(pfn, *pfn_max);
> > +
> > + do {
> > + ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
> > + GFP_NOWAIT | __GFP_NOWARN);
>
> It is a bit of a pity that __GFP_NOWARN here is applied only to xb_preload().
> Memory allocation by xb_set_bit() will still emit warnings. Maybe
>
> xb_init(&vb->page_xb);
> vb->page_...
2017 Nov 03
1
[PATCH v17 1/6] lib/xbitmap: Introduce xbitmap
I'm commenting without understanding the logic.
Wei Wang wrote:
> +
> +bool xb_preload(gfp_t gfp);
> +
This wants a __must_check annotation, since __radix_tree_preload() is marked
with __must_check. Failing by mistake to check the result of
xb_preload() will lead to preemption being kept disabled unexpectedly.
> +int xb_set_bit(struct xb *xb, unsigned long bit)
> +{
> + int err;
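For reference, the annotation being asked for would make the declaration quoted above read (sketch only):

bool __must_check xb_preload(gfp_t gfp);

__must_check maps to the compiler's warn_unused_result attribute, so (when must-check warnings are enabled) a caller that ignores the return value gets a build warning instead of silently mishandling the preemption state.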
2017 Nov 03
12
[PATCH v17 0/6] Virtio-balloon Enhancement
This patch series enhances the existing virtio-balloon with the following
new features:
1) fast ballooning: transfer ballooned pages between the guest and host in
chunks using sgs, instead of one array each time; and
2) free page block reporting: a new virtqueue to report guest free pages
to the host.
The second feature can be used to accelerate live migration of VMs. Here
are some details:
Live
2017 Nov 29
0
[PATCH v18 07/10] virtio-balloon: VIRTIO_BALLOON_F_SG
...+static inline int xb_set_page(struct virtio_balloon *vb,
+ struct page *page,
+ unsigned long *pfn_min,
+ unsigned long *pfn_max)
+{
+ unsigned long pfn = page_to_pfn(page);
+ int ret;
+
+ *pfn_min = min(pfn, *pfn_min);
+ *pfn_max = max(pfn, *pfn_max);
+
+ do {
+ ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
+ GFP_NOWAIT | __GFP_NOWARN);
+ } while (unlikely(ret == -EAGAIN));
+
+ return ret;
+}
+
static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
{
unsigned num_allocated_pages;
unsigned num_pfns;
struct page *page;
LIST_HEAD(pages);
+ bool us...
2017 Nov 03
0
[PATCH v17 4/6] virtio-balloon: VIRTIO_BALLOON_F_SG
...+static inline int xb_set_page(struct virtio_balloon *vb,
+ struct page *page,
+ unsigned long *pfn_min,
+ unsigned long *pfn_max)
+{
+ unsigned long pfn = page_to_pfn(page);
+ int ret;
+
+ *pfn_min = min(pfn, *pfn_min);
+ *pfn_max = max(pfn, *pfn_max);
+
+ do {
+ ret = xb_preload_and_set_bit(&vb->page_xb, pfn, GFP_KERNEL);
+ } while (unlikely(ret == -EAGAIN));
+
+ return ret;
+}
+
static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
{
unsigned num_allocated_pages;
unsigned int num_pfns;
struct page *page;
LIST_HEAD(pages);
+ bool use_sg = virtio_has_feat...