search for: list_for_each_entry_saf

Displaying 20 results from an estimated 823 matches for "list_for_each_entry_saf".

2012 Mar 29
0
[PATCH v2 2/2] m2p_find_override: use list_for_each_entry_safe
Use list_for_each_entry_safe and remove the spin_lock acquisition in m2p_find_override: getting stale entries is OK because we should never get an m2p_find_override call looking for an entry that we are about to add or delete. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> --- arch/x86/xen/p2m.c...
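The pattern in that patch is a plain lookup over a bucket list. A minimal, hypothetical sketch of what such a lockless m2p lookup loop might look like follows; the struct and field names are illustrative, not the actual arch/x86/xen/p2m.c definitions.

#include <linux/list.h>

/* Hypothetical override entry; the real structure lives in arch/x86/xen/p2m.c. */
struct m2p_override_entry {
	struct list_head	list;
	unsigned long		mfn;
	struct page		*page;
};

/*
 * Walk one hash bucket without taking the bucket lock.  Using the _safe
 * variant means the loop body may drop the current entry, and a stale
 * (already removed) entry is tolerated because, per the commit message,
 * callers never look up an entry that is concurrently being added or deleted.
 */
static struct page *m2p_lookup_sketch(struct list_head *bucket, unsigned long mfn)
{
	struct m2p_override_entry *entry, *next;

	list_for_each_entry_safe(entry, next, bucket, list) {
		if (entry->mfn == mfn)
			return entry->page;
	}
	return NULL;
}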
2013 Sep 01
0
[PATCH] btrfs: use list_for_each_entry_safe() when delete items
Replace list_for_each_entry() with list_for_each_entry_safe() in __btrfs_close_devices(): list_for_each_entry() { list_replace_rcu(); call_rcu(); <-- we may free the device; if we then fetch the next device from the current (freed) one, a page fault may happen. }...
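The reasoning above is the textbook case for the _safe variant: list_for_each_entry() reads the next pointer from the current element after the loop body has run, so freeing that element turns the read into a use-after-free, while list_for_each_entry_safe() caches the next element up front. A minimal sketch with a hypothetical struct device_item standing in for the real btrfs_device:

#include <linux/list.h>
#include <linux/slab.h>

struct device_item {
	struct list_head list;
	int		 id;
};

static void close_devices_sketch(struct list_head *devices)
{
	struct device_item *dev, *next;

	/*
	 * Buggy pattern: list_for_each_entry() dereferences dev->list.next
	 * *after* the body, so freeing dev inside the body would make the
	 * next iteration read freed memory:
	 *
	 *	list_for_each_entry(dev, devices, list) {
	 *		list_del(&dev->list);
	 *		kfree(dev);
	 *	}
	 */

	/* Safe pattern: the next element is saved in @next before the body runs. */
	list_for_each_entry_safe(dev, next, devices, list) {
		list_del(&dev->list);
		kfree(dev);
	}
}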
2007 May 17
1
[PATCH] ocfs: use list_for_each_entry where beneficial
...t_head *iter, *tmp; + struct o2net_status_wait *nsw, *tmp; unsigned int num_kills = 0; - struct o2net_status_wait *nsw; assert_spin_locked(&nn->nn_lock); - list_for_each_safe(iter, tmp, &nn->nn_status_list) { - nsw = list_entry(iter, struct o2net_status_wait, ns_node_item); + list_for_each_entry_safe(nsw, tmp, &nn->nn_status_list, ns_node_item) { o2net_complete_nsw_locked(nn, nsw, O2NET_ERR_DIED, 0); num_kills++; } @@ -764,13 +762,10 @@ EXPORT_SYMBOL_GPL(o2net_register_handler void o2net_unregister_handler_list(struct list_head *list) { - struct list_head *pos, *n; - struct...
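The hunk above is a mechanical conversion: drop the struct list_head *iter cursor and let the macro do the list_entry() step itself. A simplified before/after sketch, with names trimmed from the o2net code quoted above (the real loop calls o2net_complete_nsw_locked(); a plain list_del() stands in here):

#include <linux/list.h>

struct o2net_status_wait_sketch {
	struct list_head ns_node_item;
	/* ... */
};

static unsigned int kill_waiters_sketch(struct list_head *status_list)
{
	struct o2net_status_wait_sketch *nsw, *tmp;
	unsigned int num_kills = 0;

	/*
	 * Before:
	 *	struct list_head *iter, *tmp;
	 *	list_for_each_safe(iter, tmp, status_list) {
	 *		nsw = list_entry(iter, struct o2net_status_wait_sketch,
	 *				 ns_node_item);
	 *		...
	 *	}
	 *
	 * After: the entry-based macro hides the list_entry() call.
	 */
	list_for_each_entry_safe(nsw, tmp, status_list, ns_node_item) {
		list_del(&nsw->ns_node_item);	/* stand-in for completing the waiter */
		num_kills++;
	}
	return num_kills;
}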
2023 May 31
11
[Bug 1685] New: Calling the nftnl_set_free function may trigger the "double free" problem.
https://bugzilla.netfilter.org/show_bug.cgi?id=1685 Bug ID: 1685 Summary: Calling the nftnl_set_free function may trigger the "double free" problem. Product: libnftnl Version: unspecified Hardware: All OS: All Status: NEW Severity: critical Priority: P5
2016 Dec 12
3
[PATCH v4 2/4] vhost-vsock: add pkt cancel capability
...t vhost_vsock *vsock; + struct virtio_vsock_pkt *pkt, *n; + int cnt = 0; + LIST_HEAD(freeme); + + /* Find the vhost_vsock according to guest context id */ + vsock = vhost_vsock_get(vsk->remote_addr.svm_cid); + if (!vsock) + return -ENODEV; + + spin_lock_bh(&vsock->send_pkt_list_lock); + list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) { + if (pkt->cancel_token != vsk) + continue; + list_move(&pkt->list, &freeme); + } + spin_unlock_bh(&vsock->send_pkt_list_lock); + + list_for_each_entry_safe(pkt, n, &freeme, list) { + if (pkt->reply) + cnt++; + list_...
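The shape of that cancel path is worth calling out: matching packets are only moved onto a private freeme list while the lock is held, and are freed afterwards with the lock dropped. A condensed sketch of that two-phase pattern, with a hypothetical struct pkt_sketch in place of virtio_vsock_pkt:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct pkt_sketch {
	struct list_head list;
	void		*cancel_token;
	bool		 reply;
};

static int cancel_pkts_sketch(struct list_head *send_pkt_list,
			      spinlock_t *send_pkt_list_lock,
			      void *vsk)
{
	struct pkt_sketch *pkt, *n;
	int cnt = 0;
	LIST_HEAD(freeme);

	/* Phase 1: under the lock, only move matching packets aside. */
	spin_lock_bh(send_pkt_list_lock);
	list_for_each_entry_safe(pkt, n, send_pkt_list, list) {
		if (pkt->cancel_token != vsk)
			continue;
		list_move(&pkt->list, &freeme);
	}
	spin_unlock_bh(send_pkt_list_lock);

	/* Phase 2: free them with the lock dropped; freeme is private now. */
	list_for_each_entry_safe(pkt, n, &freeme, list) {
		if (pkt->reply)
			cnt++;
		list_del(&pkt->list);
		kfree(pkt);
	}
	return cnt;
}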
2014 Jun 17
1
[PATCH] drm/nouveau: fix oops in display destructor with headless cards
...uveau/core/engine/disp/base.c index c41f656abe64..9c38c5e40500 100644 --- a/drivers/gpu/drm/nouveau/core/engine/disp/base.c +++ b/drivers/gpu/drm/nouveau/core/engine/disp/base.c @@ -99,8 +99,10 @@ _nouveau_disp_dtor(struct nouveau_object *object) nouveau_event_destroy(&disp->vblank); - list_for_each_entry_safe(outp, outt, &disp->outp, head) { - nouveau_object_ref(NULL, (struct nouveau_object **)&outp); + if (disp->outp.next) { + list_for_each_entry_safe(outp, outt, &disp->outp, head) { + nouveau_object_ref(NULL, (struct nouveau_object **)&outp); + } } nouveau_engine_d...
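The fix above only walks disp->outp when its next pointer is non-NULL, i.e. when the list head was actually initialized; on headless cards the destructor can run before that happens, and list_for_each_entry_safe() would otherwise dereference a NULL next pointer. A hedged sketch of the guarded teardown (hypothetical names; kfree() stands in for the nouveau_object_ref(NULL, ...) drop):

#include <linux/list.h>
#include <linux/slab.h>

struct outp_sketch {
	struct list_head head;
	/* ... */
};

struct disp_sketch {
	struct list_head outp;	/* may still be zeroed if init never ran */
};

static void disp_dtor_sketch(struct disp_sketch *disp)
{
	struct outp_sketch *outp, *outt;

	/*
	 * A zeroed, never-INIT_LIST_HEAD()ed list head has outp.next == NULL;
	 * skip teardown in that case, exactly like the nouveau patch above.
	 */
	if (disp->outp.next) {
		list_for_each_entry_safe(outp, outt, &disp->outp, head) {
			list_del(&outp->head);
			kfree(outp);
		}
	}
}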
2016 Dec 07
1
[PATCH 3/4] vsock: add pkt cancel capability
..._cancel_pkt(struct vsock_sock *vsk) +{ + struct virtio_vsock *vsock; + struct virtio_vsock_pkt *pkt, *n; + int cnt = 0; + LIST_HEAD(freeme); + + vsock = virtio_vsock_get(); + if (!vsock) { + return -ENODEV; + } + + if (pkt->reply) + cnt++; + + spin_lock_bh(&vsock->send_pkt_list_lock); + list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) { + if (pkt->vsk != vsk) + continue; + list_move(&pkt->list, &freeme); + } + spin_unlock_bh(&vsock->send_pkt_list_lock); + + list_for_each_entry_safe(pkt, n, &freeme, list) { + if (pkt->reply) + cnt++; + list_del(&...
2016 Dec 07
7
[PATCH 0/4] vsock: cancel connect packets when failing to connect
Currently, if a connect call fails on a signal or timeout (e.g., the guest is still in the process of starting up), we just return to the caller and leave the connect packets queued; they are then sent even though the connection is considered a failure, which can confuse applications with an unwanted, spurious connect attempt. The patchset enables vsock (both host and guest) to cancel queued packets when a
2016 Jan 01
5
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
On Mon, Dec 28, 2015 at 08:35:13AM +0900, Minchan Kim wrote: > In balloon_page_dequeue, pages_lock should cover the loop > (i.e., the list_for_each_entry_safe). Otherwise, the cursor page could > be isolated by compaction, and the list_del done by isolation could > poison page->lru.{prev,next}, so the loop could end up > accessing a wrong address, like this. This patch fixes the bug. > > general protection fault: 0000 [#1] SMP > Dumping f...
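The point being quoted is that the loop has to sit entirely inside pages_lock; otherwise compaction can list_del() the cursor page between iterations and poison its lru pointers. A minimal sketch of the corrected shape of balloon_page_dequeue-style code (simplified, not the actual mm/balloon_compaction.c body):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/mm.h>

/* Simplified stand-in for struct balloon_dev_info. */
struct balloon_sketch {
	spinlock_t	 pages_lock;
	struct list_head pages;
};

static struct page *dequeue_one_sketch(struct balloon_sketch *b)
{
	struct page *page, *tmp;
	struct page *dequeued = NULL;
	unsigned long flags;

	/*
	 * pages_lock must cover the whole list_for_each_entry_safe() walk;
	 * if it is dropped mid-loop, compaction can isolate the cursor page,
	 * list_del() poisons page->lru.{prev,next}, and the next iteration
	 * follows a poisoned pointer.
	 */
	spin_lock_irqsave(&b->pages_lock, flags);
	list_for_each_entry_safe(page, tmp, &b->pages, lru) {
		list_del(&page->lru);
		dequeued = page;
		break;
	}
	spin_unlock_irqrestore(&b->pages_lock, flags);

	return dequeued;
}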
2016 Dec 08
6
[PATCH v3 2/4] vhost-vsock: add pkt cancel capability
...t vhost_vsock *vsock; + struct virtio_vsock_pkt *pkt, *n; + int cnt = 0; + LIST_HEAD(freeme); + + /* Find the vhost_vsock according to guest context id */ + vsock = vhost_vsock_get(vsk->remote_addr.svm_cid); + if (!vsock) + return -ENODEV; + + spin_lock_bh(&vsock->send_pkt_list_lock); + list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) { + if (pkt->cancel_token != (void *)vsk) + continue; + list_move(&pkt->list, &freeme); + } + spin_unlock_bh(&vsock->send_pkt_list_lock); + + list_for_each_entry_safe(pkt, n, &freeme, list) { + if (pkt->reply) + cnt++;...
2019 Dec 18
7
[PATCH v3 0/5] iommu: Implement generic_iommu_put_resv_regions()
From: Thierry Reding <treding at nvidia.com> Most IOMMU drivers only need to free the memory allocated for each reserved region. Instead of open-coding the loop to do this in each driver, extract the code into a common function that can be used by all these drivers. Changes in v3: - add Reviewed-by from Jean-Philippe Brucker on virtio patch - add Acked-by from Will Deacon on ARM SMMU patch
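The common helper being proposed just walks the reserved-region list and frees each entry. A hedged sketch of what such a generic implementation looks like (struct iommu_resv_region is the real structure; the function body here is illustrative rather than the merged code):

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/slab.h>

/*
 * Free every reserved region a driver previously allocated (e.g. with
 * iommu_alloc_resv_region()).  Drivers whose ->get_resv_regions() only
 * kmalloc()s entries can use one shared helper like this instead of
 * open-coding the loop.
 */
static void put_resv_regions_sketch(struct device *dev, struct list_head *head)
{
	struct iommu_resv_region *entry, *next;

	list_for_each_entry_safe(entry, next, head, list) {
		list_del(&entry->list);
		kfree(entry);
	}
}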
2006 Aug 02
2
[PATCH][RFC] permit domU userspace to watch xenstore
...free_watch_adapter(watch); + return err; + } + + list_add(&watch->list, &u->watches); + + hdr.type = XS_WATCH; + hdr.len = strlen(XS_WATCH_RESP) + 1; + queue_reply(u, (char *)&hdr, sizeof(hdr)); + queue_reply(u, (char *)XS_WATCH_RESP, hdr.len); + } else { + list_for_each_entry_safe(watch, tmp_watch, + &u->watches, list) { + if (!strcmp(watch->token, token) && + !strcmp(watch->watch.node, path)) + break; + { + unregister_xenbus_watch(&watch->watch); + list_del(&watch->l...
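The truncated hunk above is the unwatch path: walk the per-connection watch list, find the entry whose token and node match, unregister it and drop it. A cleaned-up sketch of that loop (the xenbus calls are the real API; the adapter structure and field names are simplified from the excerpt):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <xen/xenbus.h>

/* Per-user watch bookkeeping, as in the excerpt above (simplified). */
struct watch_adapter_sketch {
	struct list_head	list;
	struct xenbus_watch	watch;
	char			*token;
};

static void unwatch_sketch(struct list_head *watches,
			   const char *token, const char *path)
{
	struct watch_adapter_sketch *watch, *tmp_watch;

	list_for_each_entry_safe(watch, tmp_watch, watches, list) {
		if (strcmp(watch->token, token) ||
		    strcmp(watch->watch.node, path))
			continue;
		unregister_xenbus_watch(&watch->watch);
		list_del(&watch->list);
		kfree(watch->token);
		kfree(watch);
		break;
	}
}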
2019 Dec 09
8
[PATCH v2 0/5] iommu: Implement iommu_put_resv_regions_simple()
From: Thierry Reding <treding at nvidia.com> Most IOMMU drivers only need to free the memory allocated for each reserved region. Instead of open-coding the loop to do this in each driver, extract the code into a common function that can be used by all these drivers. Changes in v2: - change subject prefix to "iommu: virtio: " for virtio-iommu.c driver Thierry Thierry Reding (5):
2015 Dec 27
5
[PATCH 1/2] virtio_balloon: fix race by fill and leak
While working on compaction, I encountered a bug with ballooning. With repeated inflate and deflate cycles, guest memory (i.e., cat /proc/meminfo | grep MemTotal) decreases and cannot be recovered. The reason is that balloon_lock doesn't cover release_pages_balloon, so struct virtio_balloon fields could be overwritten by a racing fill_balloon (e.g., vb->*pfns is the critical state).
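The fix implied here is simply to widen the critical section so the same lock covers both collecting the pfns and giving the pages back. A hedged, schematic sketch of that lock scope; the struct below is only a stand-in for the relevant struct virtio_balloon fields, and release_pages_balloon() appears only in a comment:

#include <linux/mutex.h>

/* Illustrative stand-in for the struct virtio_balloon fields that race. */
struct balloon_state_sketch {
	struct mutex	balloon_lock;
	unsigned int	num_pfns;
	unsigned long	pfns[256];
};

static void leak_balloon_sketch(struct balloon_state_sketch *vb, unsigned int n)
{
	unsigned int i;

	mutex_lock(&vb->balloon_lock);

	/* Collect the pages to return; this writes vb->num_pfns and vb->pfns[]. */
	vb->num_pfns = n;
	for (i = 0; i < n; i++)
		vb->pfns[i] = i;	/* stand-in for dequeued balloon pages */

	/*
	 * Keep the lock held while the pages are released (the real code calls
	 * release_pages_balloon(vb) here).  Dropping it first lets a concurrent
	 * fill_balloon() rewrite num_pfns/pfns[] while they are still in use.
	 */

	mutex_unlock(&vb->balloon_lock);
}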
2019 Apr 24
1
[PATCH v3 1/4] mm/balloon_compaction: list interfaces
...ed. > + */ > +size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info, > + struct list_head *pages) > +{ > + struct page *page, *tmp; > + unsigned long flags; > + size_t n_pages = 0; > + > + spin_lock_irqsave(&b_dev_info->pages_lock, flags); > + list_for_each_entry_safe(page, tmp, pages, lru) { > + balloon_page_enqueue_one(b_dev_info, page); > + n_pages++; > + } > + spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); > + return n_pages; > +} > +EXPORT_SYMBOL_GPL(balloon_page_list_enqueue); > + > +/** > + * balloon_page_l...
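For context, a hedged sketch of how a driver-side caller would use the new bulk interface: allocate pages onto a local list, then hand the whole list to balloon_page_list_enqueue(). The helper and its signature come from the patch quoted above; the surrounding driver code is illustrative only.

#include <linux/balloon_compaction.h>
#include <linux/list.h>
#include <linux/mm.h>

static size_t inflate_sketch(struct balloon_dev_info *b_dev_info,
			     unsigned int nr)
{
	LIST_HEAD(pages);
	struct page *page;
	unsigned int i;

	/* Allocate the batch first, collecting it on a private list. */
	for (i = 0; i < nr; i++) {
		page = balloon_page_alloc();
		if (!page)
			break;
		list_add(&page->lru, &pages);
	}

	/* One call queues the whole batch under b_dev_info->pages_lock. */
	return balloon_page_list_enqueue(b_dev_info, &pages);
}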