Displaying 20 results from an estimated 52 matches for "nr_free".
2013 Aug 20 · 5 · [PATCH-v3 1/4] idr: Percpu ida
...cling through them every
> + * time we steal is a bit easier and more or less equivalent:
> + */
> + unsigned cpu_last_stolen;
> +
> + /* For sleeping on allocation failure */
> + wait_queue_head_t wait;
> +
> + /*
> + * Global freelist - it's a stack where nr_free points to the
> + * top
> + */
> + unsigned nr_free;
> + unsigned *freelist;
> + } ____cacheline_aligned_in_smp;
Why the ____cacheline_aligned_in_smp?
> +};
>
> ...
>
> +
> +/* Percpu IDA */
> +
> +/*
> + * Number of tags we move between the perc...
2013 Aug 16 · 0 · [PATCH-v3 1/4] idr: Percpu ida
...)),
+ * we want to pick a cpu at random. Cycling through them every
+ * time we steal is a bit easier and more or less equivalent:
+ */
+ unsigned cpu_last_stolen;
+
+ /* For sleeping on allocation failure */
+ wait_queue_head_t wait;
+
+ /*
+ * Global freelist - it's a stack where nr_free points to the
+ * top
+ */
+ unsigned nr_free;
+ unsigned *freelist;
+ } ____cacheline_aligned_in_smp;
+};
+
+int percpu_ida_alloc(struct percpu_ida *pool, gfp_t gfp);
+void percpu_ida_free(struct percpu_ida *pool, unsigned tag);
+
+void percpu_ida_destroy(struct percpu_ida *pool);
+int per...
2013 Aug 16 · 6 · [PATCH-v3 0/4] target/vhost-scsi: Add per-cpu ida tag pre-allocation for v3.12
From: Nicholas Bellinger <nab at linux-iscsi.org>
Hi folks,
This is an updated series for adding tag pre-allocation support of
target fabric descriptor memory, utilizing Kent's latest per-cpu ida
bits here, along with Christoph Lameter's latest comments:
[PATCH 04/10] idr: Percpu ida
http://marc.info/?l=linux-kernel&m=137160026006974&w=2
The first patch is a
2013 Aug 21 · 1 · [PATCH-v3 1/4] idr: Percpu ida
On Fri, 16 Aug 2013, Nicholas A. Bellinger wrote:
> + spinlock_t lock;
Remove the spinlock.
> + unsigned nr_free;
> + unsigned freelist[];
> +};
> +
> +static inline void move_tags(unsigned *dst, unsigned *dst_nr,
> + unsigned *src, unsigned *src_nr,
> + unsigned nr)
> +{
> + *src_nr -= nr;
> + memcpy(dst + *dst_nr, src + *src_nr, sizeof(unsigned) * nr);
> + *dst_...
2013 Aug 28 · 0 · [PATCH-v3 1/4] idr: Percpu ida
...we steal is a bit easier and more or less equivalent:
> > + */
> > + unsigned cpu_last_stolen;
> > +
> > + /* For sleeping on allocation failure */
> > + wait_queue_head_t wait;
> > +
> > + /*
> > + * Global freelist - it's a stack where nr_free points to the
> > + * top
> > + */
> > + unsigned nr_free;
> > + unsigned *freelist;
> > + } ____cacheline_aligned_in_smp;
>
> Why the ____cacheline_aligned_in_smp?
It's separating the RW stuff that isn't always touched from the RO stuff
that...
2013 Aug 28 · 2 · [PATCH-v3 1/4] idr: Percpu ida
...> > > + * cpus_have_tags
> > > + *
> > > + * global lock held and irqs disabled, don't need percpu lock
> > > + */
> > > + prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
> > > +
> > > + if (!tags->nr_free)
> > > + alloc_global_tags(pool, tags);
> > > + if (!tags->nr_free)
> > > + steal_tags(pool, tags);
> > > +
> > > + if (tags->nr_free) {
> > > + tag = tags->freelist[--tags->nr_free];
> > > + if (tags->nr_free)...
2018 Jun 12 · 8 · [PATCH 0/3] Use sbitmap instead of percpu_ida
Removing the percpu_ida code nets over 400 lines of removal. It's not
as spectacular as deleting an entire architecture, but it's still a
worthy reduction in lines of code.
Untested due to lack of hardware and not understanding how to set up a
target platform.
Changes from v1:
- Fixed bugs pointed out by Jens in iscsit_wait_for_tag()
- Abstracted out tag freeing as requested by Bart
2018 May 15 · 6 · [PATCH 0/2] Use sbitmap instead of percpu_ida
From: Matthew Wilcox <mawilcox at microsoft.com>
This is a pretty rough-and-ready conversion of the target drivers
from using percpu_ida to sbitmap. It compiles; I don't have a target
setup, so it's completely untested. I haven't tried to do anything
particularly clever here, so it's possible that, for example, the wait
queue in iscsi_target_util could be more clever, like
2013 Aug 28 · 0 · [PATCH] percpu ida: Switch to cpumask_t, add some comments
...021c 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -1178,7 +1178,13 @@ EXPORT_SYMBOL(ida_init);
#define IDA_PCPU_SIZE ((IDA_PCPU_BATCH_MOVE * 3) / 2)
struct percpu_ida_cpu {
+ /*
+ * Even though this is percpu, we need a lock for tag stealing by remote
+ * CPUs:
+ */
spinlock_t lock;
+
+ /* nr_free/freelist form a stack of free IDs */
unsigned nr_free;
unsigned freelist[];
};
@@ -1209,21 +1215,21 @@ static inline void steal_tags(struct percpu_ida *pool,
unsigned cpus_have_tags, cpu = pool->cpu_last_stolen;
struct percpu_ida_cpu *remote;
- for (cpus_have_tags = bitmap_weight(...
2013 Aug 28 · 0 · [PATCH-v3 1/4] idr: Percpu ida
..._tags
> > > > + *
> > > > + * global lock held and irqs disabled, don't need percpu lock
> > > > + */
> > > > + prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
> > > > +
> > > > + if (!tags->nr_free)
> > > > + alloc_global_tags(pool, tags);
> > > > + if (!tags->nr_free)
> > > > + steal_tags(pool, tags);
> > > > +
> > > > + if (tags->nr_free) {
> > > > + tag = tags->freelist[--tags->nr_free];
> > &...
2016 Dec 05 · 1 · [PATCH kernel v5 5/5] virtio-balloon: tell host vm's unused page info
...;> + struct list_head *curr;
>>> + struct page_info_item *info;
>>> +
>>> + if (zone_is_empty(zone))
>>> + return 0;
>>> +
>>> + spin_lock_irqsave(&zone->lock, flags);
>>> +
>>> + if (*pos + zone->free_area[order].nr_free > size)
>>> + return -ENOSPC;
>>
>> Urg, so this won't partially fill? So what is the nr_free page count at
>> which we no longer fit in the kmalloc()'d buffer and this simply won't work?
>
> Yes. My initial implementation is partially fill, it'...
2016 Nov 30 · 2 · [PATCH kernel v5 5/5] virtio-balloon: tell host vm's unused page info
...> +{
> + unsigned long pfn, flags;
> + unsigned int t;
> + struct list_head *curr;
> + struct page_info_item *info;
> +
> + if (zone_is_empty(zone))
> + return 0;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> +
> + if (*pos + zone->free_area[order].nr_free > size)
> + return -ENOSPC;
Urg, so this won't partially fill? So what is the nr_free page count at
which we no longer fit in the kmalloc()'d buffer and this simply won't
work?
> + for (t = 0; t < MIGRATE_TYPES; t++) {
> + list_for_each(curr, &zone->free_area[or...
2018 Jun 26 · 2 · [PATCH v34 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
...t; > > > of these.
> > > That wouldn't be a good choice for us. If we check how the regular
> > > allocation works, there are many many things we need to consider when pages
> > > are allocated to users.
> > > For example, we need to take care of the nr_free
> > > counter, we need to check the watermark and perform the related actions.
> > > Also the folks working on arch_alloc_page to monitor page allocation
> > > activities would get a surprise..if page allocation is allowed to work in
> > > this way.
> > >...