search for: list_next_entry

Displaying 20 results from an estimated 49 matches for "list_next_entry".

2018 Mar 19
4
[PATCH] gpu: drm: Use list_{next/prev}_entry instead of list_entry
This patch replaces list_entry with list_{next/prev}_entry as it makes the code clearer to read. Done using coccinelle: @@ expression e1; identifier e3; type t; @@ ( - list_entry(e1->e3.next,t,e3) + list_next_entry(e1,e3) | - list_entry(e1->e3.prev,t,e3) + list_prev_entry(e1,e3) ) Signed-off-by: Arushi Singhal <arushisinghal19971997 at gmail.com> --- drivers/gpu/drm/drm_lease.c | 2 +- drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c | 2 +- 2 files changed, 2 insertions(+), 2 dele...
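For context, the helpers this series switches to are thin wrappers around list_entry() in include/linux/list.h; they use typeof() on the existing cursor, so the containing struct type no longer has to be spelled out at every call site:

#define list_next_entry(pos, member) \
        list_entry((pos)->member.next, typeof(*(pos)), member)

#define list_prev_entry(pos, member) \
        list_entry((pos)->member.prev, typeof(*(pos)), member)

A minimal before/after sketch of what the semantic patch does (hypothetical struct and variable names, not taken from the patch):

/* hypothetical container type, for illustration only */
struct item {
        int val;
        struct list_head entry;
};

/* before */
next = list_entry(cur->entry.next, struct item, entry);
prev = list_entry(cur->entry.prev, struct item, entry);

/* after: the type argument drops out and the direction is explicit */
next = list_next_entry(cur, entry);
prev = list_prev_entry(cur, entry);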
2018 Mar 25
2
[Outreachy kernel] [PATCH] gpu: drm: Use list_{next/prev}_entry instead of list_entry
...lace list_entry with list_{next/prev}_entry as it makes > > the code more clear to read. > > Done using coccinelle: > > > > @@ > > expression e1; > > identifier e3; > > type t; > > @@ > > ( > > - list_entry(e1->e3.next,t,e3) > > + list_next_entry(e1,e3) > > | > > - list_entry(e1->e3.prev,t,e3) > > + list_prev_entry(e1,e3) > > ) > > This looks like a rule that could be nice for the Linux kernel in general, > because the code really is much simpler. > > I would suggest to write the rule in a more robu...
2023 Jul 07
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...t__), (end__) - 1) but then I changed it since I did not want to expose the interval tree functions directly. > >> + va__ && (va__->va.addr < (end__)) && \ >> + !list_entry_is_head(va__, &(mgr__)->rb.list, rb.entry); \ >> + va__ = list_next_entry(va__, rb.entry)) > > If you define: > > static inline struct drm_gpuva * > drm_gpuva_next(struct drm_gpuva *va) > { > if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list)) > return list_next_entry(va, rb.entry); > > return NULL; >...
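Reconstructing the suggested helper from the quoted fragment (field names as quoted; the driver code that was eventually merged may differ), the idea is to fold the "not past the list head" check into one accessor:

/* reconstructed from the quoted review, for illustration */
static inline struct drm_gpuva *
drm_gpuva_next(struct drm_gpuva *va)
{
        if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list))
                return list_next_entry(va, rb.entry);

        return NULL;    /* reached the end of the VA list */
}

With that, the iterator's step va__ = list_next_entry(va__, rb.entry) and the separate !list_entry_is_head() test collapse into va__ = drm_gpuva_next(va__), which terminates the loop by returning NULL.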
2018 Mar 19
0
[Outreachy kernel] [PATCH] gpu: drm: Use list_{next/prev}_entry instead of list_entry
...19 Mar 2018, Arushi Singhal wrote: > This patch replace list_entry with list_{next/prev}_entry as it makes > the code more clear to read. > Done using coccinelle: > > @@ > expression e1; > identifier e3; > type t; > @@ > ( > - list_entry(e1->e3.next,t,e3) > + list_next_entry(e1,e3) > | > - list_entry(e1->e3.prev,t,e3) > + list_prev_entry(e1,e3) > ) This looks like a rule that could be nice for the Linux kernel in general, because the code really is much simpler. I would suggest to write the rule in a more robust way, as follows: @@ identifier e3; type...
2018 Mar 19
0
[Outreachy kernel] [PATCH] gpu: drm: Use list_{next/prev}_entry instead of list_entry
...:30AM +0530, Arushi Singhal wrote: > This patch replace list_entry with list_{next/prev}_entry as it makes > the code more clear to read. > Done using coccinelle: > > @@ > expression e1; > identifier e3; > type t; > @@ > ( > - list_entry(e1->e3.next,t,e3) > + list_next_entry(e1,e3) > | > - list_entry(e1->e3.prev,t,e3) > + list_prev_entry(e1,e3) > ) > > Signed-off-by: Arushi Singhal <arushisinghal19971997 at gmail.com> Thanks for your patch. Looks correct, but for merge technical reasons can you please split it into 2 patches? One for drm_le...
2018 Mar 25
4
[PATCH v2 0/2] drm: Replace list_entry
Replace list_entry with list_{next/prev}_entry. Arushi Singhal (2): gpu: drm/lease:: Use list_{next/prev}_entry instead of list_entry gpu: drm: nouveau: Use list_{next/prev}_entry instead of list_entry drivers/gpu/drm/drm_lease.c | 2 +- drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) -- changes in v2 *All the
2019 Sep 23
1
[RFC] VSOCK: add support for MSG_PEEK
...> + spin_lock_bh(&vvs->rx_lock); > + > + total += bytes; Using list_for_each_entry(), here we can just do: (or better, at the beginning of the cycle) if (total == len) break; removing the next part... > + off += bytes; > + if (off == pkt->len) { > + pkt = list_next_entry(pkt, list); > + off = 0; > + } > + } while ((total < len) && !list_is_first(&pkt->list, &vvs->rx_queue)); ...until here. > + > + spin_unlock_bh(&vvs->rx_lock); > + > + return total; > + > +out: > + if (total) > + err = total; >...
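A condensed sketch of the reviewer's suggestion (identifier names follow the quoted context, the packet struct name is an assumption, and the user-space copy plus the partial-read offset bookkeeping are omitted, so this is an illustration rather than the final implementation): iterate the rx queue with list_for_each_entry() and break once len bytes have been peeked, instead of stepping from packet to packet by hand with list_next_entry() and list_is_first():

size_t total = 0;
struct virtio_vsock_pkt *pkt;    /* struct name assumed for the sketch */

spin_lock_bh(&vvs->rx_lock);
list_for_each_entry(pkt, &vvs->rx_queue, list) {
        size_t bytes;

        if (total == len)
                break;

        bytes = min_t(size_t, len - total, pkt->len);
        /* copy 'bytes' from pkt->buf to the user buffer here without
         * dequeueing the packet (MSG_PEEK must leave the queue intact) */
        total += bytes;
}
spin_unlock_bh(&vvs->rx_lock);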
2017 Jul 14
4
[PATCH v12 6/8] mm: support reporting free page blocks
...> + if ((*page)->lru.next == this_list) { > + *page = NULL; > + ret = 1; > + goto out; > + } > + > + /* > + * Finally, fall into the regular case: the page block passed from the > + * caller is still on the free list. Offer the next one. > + */ > + *page = list_next_entry((*page), lru); > + ret = 0; > +out: > + spin_unlock_irqrestore(&this_zone->lock, flags); > + return ret; > +} > +EXPORT_SYMBOL(report_unused_page_block); > + > +#endif > + > static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) > { >...
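The pattern quoted throughout this series (v9 through v13 below) reduces to: under zone->lock, report end-of-list if the caller's page block is the last entry on its free list, otherwise hand back the following block via list_next_entry(). A condensed sketch of just that step (function name made up, and the series' additional validation of the passed-in block is dropped, so this is an illustration, not the patch itself):

static int offer_next_block(struct zone *zone, struct list_head *free_list,
                            struct page **page)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&zone->lock, flags);
        if (list_is_last(&(*page)->lru, free_list)) {
                /* same condition as the quoted (*page)->lru.next == free_list */
                *page = NULL;
                ret = 1;
        } else {
                *page = list_next_entry(*page, lru);
        }
        spin_unlock_irqrestore(&zone->lock, flags);
        return ret;
}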
2017 Jul 14
4
[PATCH v12 6/8] mm: support reporting free page blocks
...> + if ((*page)->lru.next == this_list) { > + *page = NULL; > + ret = 1; > + goto out; > + } > + > + /* > + * Finally, fall into the regular case: the page block passed from the > + * caller is still on the free list. Offer the next one. > + */ > + *page = list_next_entry((*page), lru); > + ret = 0; > +out: > + spin_unlock_irqrestore(&this_zone->lock, flags); > + return ret; > +} > +EXPORT_SYMBOL(report_unused_page_block); > + > +#endif > + > static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) > { >...
2018 Mar 25
0
[PATCH v2 1/2] gpu: drm/lease:: Use list_{next/prev}_entry instead of list_entry
...index 1402c0e..4dcfb5f 100644 --- a/drivers/gpu/drm/drm_lease.c +++ b/drivers/gpu/drm/drm_lease.c @@ -340,7 +340,7 @@ static void _drm_lease_revoke(struct drm_master *top) break; /* Over */ - master = list_entry(master->lessee_list.next, struct drm_master, lessee_list); + master = list_next_entry(master, lessee_list); } } } -- 2.7.4
2018 Feb 20
0
[PATCH 1/4] iommu: Add virtio-iommu driver
...written = len; >> + >> + if (++nr_received == nr_sent) { >> + WARN_ON(!list_is_last(&pending->list, sent)); >> + break; >> + } else if (WARN_ON(list_is_last(&pending->list, sent))) { >> + break; >> + } >> + >> + pending = list_next_entry(pending, list); > > We should remove current element from the pending list. There is no > guarantee we get response for each while loop so when we get back for > more the _viommu_send_reqs_sync() caller will pass pointer to the out of > date head next time. Right, I'll fix t...
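A sketch of what the reviewer is asking for (the names 'pending', 'sent' and 'written' follow the quoted context; the surrounding loop and the request type are assumptions, so this is not the driver code): take the next element before unlinking the completed request, so that a later pass over the pending list does not start from a stale position:

/* illustrative fragment only; 'pending' and 'sent' as in the quoted loop */
typeof(pending) next;

next = list_is_last(&pending->list, sent) ? NULL
                                          : list_next_entry(pending, list);
list_del(&pending->list);       /* drop the completed request from 'sent' */
/* ... record pending->written and complete the request ... */
pending = next;                 /* NULL once the last reply has arrived */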
2017 Apr 13
0
[PATCH v9 3/5] mm: function to offer a page block on the free list
...r has been the last page block + * on the list. + */ + if ((*page)->lru.next == this_list) { + *page = NULL; + ret = 1; + goto out; + } + + /** + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); + ret = 0; +out: + spin_unlock_irqrestore(&this_zone->lock, flags); + return ret; +} +EXPORT_SYMBOL(inquire_unused_page_block); + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 2.7.4
2017 Jun 09
0
[PATCH v11 4/6] mm: function to offer a page block on the free list
...er has been the last page block + * on the list. + */ + if ((*page)->lru.next == this_list) { + *page = NULL; + ret = 1; + goto out; + } + + /* + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); + ret = 0; +out: + spin_unlock_irqrestore(&this_zone->lock, flags); + return ret; +} +EXPORT_SYMBOL(report_unused_page_block); + +#endif + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 2.7.4
2017 May 04
0
[PATCH v10 4/6] mm: function to offer a page block on the free list
...r has been the last page block + * on the list. + */ + if ((*page)->lru.next == this_list) { + *page = NULL; + ret = 1; + goto out; + } + + /** + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); + ret = 0; +out: + spin_unlock_irqrestore(&this_zone->lock, flags); + return ret; +} +EXPORT_SYMBOL(report_unused_page_block); + +#endif + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 2.7.4
2017 Jul 12
0
[PATCH v12 6/8] mm: support reporting free page blocks
...er has been the last page block + * on the list. + */ + if ((*page)->lru.next == this_list) { + *page = NULL; + ret = 1; + goto out; + } + + /* + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); + ret = 0; +out: + spin_unlock_irqrestore(&this_zone->lock, flags); + return ret; +} +EXPORT_SYMBOL(report_unused_page_block); + +#endif + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 2.7.4
2017 Aug 03
0
[PATCH v13 4/5] mm: support reporting free page blocks
...been the last page block + * on the list. + */ + if ((*page)->lru.next == free_list) { + *page = NULL; + ret = -EAGAIN; + goto out; + } + + /* + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); +out: + spin_unlock_irqrestore(&zone->lock, flags); + return ret; +} + +/* + * Walk through the free page blocks in the system. The @visit callback is + * invoked to handle each free page block. + * + * Note: some page blocks may be used after the report function returns, so i...
2018 Jul 10
0
[PATCH v35 1/5] mm: support to get hints of free page blocks
...; MIGRATE_TYPES; mt++) { + free_list = &zone->free_area[order].free_list[mt]; + list_for_each_entry(free_page, free_list, lru) { + addr = page_to_pfn(free_page) << PAGE_SHIFT; + /* This buffer is full, so use the next one */ + if (entry_index == entries) { + buf_page = list_next_entry(buf_page, + lru); + /* All the buffers are consumed */ + if (!buf_page) { + spin_unlock_irq(&zone->lock); + *loaded_num = used_buf_num * + entries; + return -ENOSPC; + } + buf = (__le64 *)page_address(buf_page); + entry_index = 0; +...
2017 Jul 14
0
[PATCH v12 6/8] mm: support reporting free page blocks
...> > + *page = NULL; > > + ret = 1; > > + goto out; > > + } > > + > > + /* > > + * Finally, fall into the regular case: the page block passed from the > > + * caller is still on the free list. Offer the next one. > > + */ > > + *page = list_next_entry((*page), lru); > > + ret = 0; > > +out: > > + spin_unlock_irqrestore(&this_zone->lock, flags); > > + return ret; > > +} > > +EXPORT_SYMBOL(report_unused_page_block); > > + > > +#endif > > + > > static void zoneref_set_zone(struct z...
2017 Jul 14
0
[PATCH v12 6/8] mm: support reporting free page blocks
...> > + *page = NULL; > > + ret = 1; > > + goto out; > > + } > > + > > + /* > > + * Finally, fall into the regular case: the page block passed from the > > + * caller is still on the free list. Offer the next one. > > + */ > > + *page = list_next_entry((*page), lru); > > + ret = 0; > > +out: > > + spin_unlock_irqrestore(&this_zone->lock, flags); > > + return ret; > > +} > > +EXPORT_SYMBOL(report_unused_page_block); > > + > > +#endif > > + > > static void zoneref_set_zone(struct z...
2017 Jul 13
1
[PATCH v12 6/8] mm: support reporting free page blocks
...> + if ((*page)->lru.next == this_list) { > + *page = NULL; > + ret = 1; > + goto out; > + } > + > + /* > + * Finally, fall into the regular case: the page block passed from the > + * caller is still on the free list. Offer the next one. > + */ > + *page = list_next_entry((*page), lru); > + ret = 0; > +out: > + spin_unlock_irqrestore(&this_zone->lock, flags); > + return ret; > +} > +EXPORT_SYMBOL(report_unused_page_block); > + > +#endif > + > static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) > { >...