search for: "list_splice"

2016 Jan 09
1
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
...e);
> >         dequeued_page = true;
> >         break;
>           ^^^^[1]
>
> >     }
> > +   put_page(page);
> > +   spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> > }
> >
> > +   /* re-add remaining entries */
> > +   list_splice(&processed, &b_dev_info->pages);
>
> By breaking the loop at its ordinary and expected way-out case [1]
> we'll hit list_splice without holding b_dev_info->pages_lock, won't we?

Ouch. right.

> perhaps by adding the following on top of your patch we can address...

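The locking shape this thread converges on, condensed: the loop may drop pages_lock for the per-page cleanup, so the closing list_splice of the leftover entries is only safe if the lock is re-taken on the break path too. Below is a minimal userspace sketch of that corrected shape, assuming nothing beyond the thread itself: pthread_mutex_t stands in for the spinlock, list_splice and friends are re-implemented by hand, and page_entry/dequeue_one are illustrative names, not the driver's.

/* Sketch: drain a locked list, dropping the lock only for the
 * per-entry work, and re-take it before splicing the leftovers back. */
#include <pthread.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add(struct list_head *e, struct list_head *head)
{
    e->next = head->next; e->prev = head;
    head->next->prev = e; head->next = e;
}

/* Kernel-style list_splice: join @list in front of @head's entries. */
static void list_splice(struct list_head *list, struct list_head *head)
{
    if (!list_empty(list)) {
        struct list_head *first = list->next, *last = list->prev;
        first->prev = head; last->next = head->next;
        head->next->prev = last; head->next = first;
    }
}

struct page_entry { struct list_head lru; int ready; };

static pthread_mutex_t pages_lock = PTHREAD_MUTEX_INITIALIZER;
static struct list_head pages;  /* stand-in for b_dev_info->pages; INIT_LIST_HEAD() at setup */

static struct page_entry *dequeue_one(void)
{
    struct list_head processed;
    struct page_entry *got = NULL;

    INIT_LIST_HEAD(&processed);
    pthread_mutex_lock(&pages_lock);
    while (!list_empty(&pages)) {
        /* lru is the first member, so this cast is well-defined */
        struct page_entry *p = (struct page_entry *)pages.next;

        list_del(&p->lru);
        if (p->ready) {
            got = p;
            pthread_mutex_unlock(&pages_lock);
            /* ... unlock_page()/put_page() equivalents, lock dropped ... */
            pthread_mutex_lock(&pages_lock);  /* [1]: re-take before the splice */
            break;
        }
        list_add(&p->lru, &processed);  /* not a match: park it locally */
    }
    /* every exit path reaches this point with pages_lock held */
    list_splice(&processed, &pages);
    pthread_mutex_unlock(&pages_lock);
    return got;
}

The point is structural: every path that reaches the splice arrives with the lock held, including the early break.
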
2016 Jan 08
0
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
...ags);
>       unlock_page(page);
> +     put_page(page);
>       dequeued_page = true;
>       break;
          ^^^^[1]
>   }
> + put_page(page);
> + spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> }
>
> + /* re-add remaining entries */
> + list_splice(&processed, &b_dev_info->pages);

By breaking the loop at its ordinary and expected way-out case [1]
we'll hit list_splice without holding b_dev_info->pages_lock, won't we?

perhaps by adding the following on top of your patch we can address
that pickle aforementioned:

Chee...

2016 Jan 01
5
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
...vm_event(BALLOON_DEFLATE);
            spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
            unlock_page(page);
+           put_page(page);
            dequeued_page = true;
            break;
        }
+       put_page(page);
+       spin_lock_irqsave(&b_dev_info->pages_lock, flags);
    }

+   /* re-add remaining entries */
+   list_splice(&processed, &b_dev_info->pages);
+   spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
    if (!dequeued_page) {
        /*
         * If we are unable to dequeue a balloon page because the page

2019 Oct 28
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
...rmn->lock);
-
-   while ((it = interval_tree_iter_first(&rmn->objects, addr, end))) {
-       kfree(node);
-       node = container_of(it, struct radeon_mn_node, it);
-       interval_tree_remove(&node->it, &rmn->objects);
-       addr = min(it->start, addr);
-       end = max(it->last, end);
-       list_splice(&node->bos, &bos);
-   }
-
-   if (!node) {
-       node = kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL);
-       if (!node) {
-           mutex_unlock(&rmn->lock);
-           return -ENOMEM;
-       }
-   }
-
-   bo->mn = rmn;
-
-   node->it.start = addr;
-   node->it.last = end;
-   INIT_LIST_HEAD(&node...

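For readers landing here from the search term: the idiom being deleted above is a range merge. Every node overlapping the new [addr, end] is absorbed, the range grows by min/max, and each absorbed node's bos list is spliced onto a local accumulator, after which a single node covering the union is reinserted. A self-contained userspace sketch of that idiom, with a linear list scan standing in for interval_tree_iter_first() and mn_node/merge_range as illustrative names:

/* Sketch: collapse every tracked node overlapping [addr, end] into one
 * node covering the union of their ranges, keeping all their buffers. */
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add(struct list_head *e, struct list_head *head)
{
    e->next = head->next; e->prev = head;
    head->next->prev = e; head->next = e;
}

static void list_splice(struct list_head *list, struct list_head *head)
{
    if (!list_empty(list)) {
        struct list_head *first = list->next, *last = list->prev;
        first->prev = head; last->next = head->next;
        head->next->prev = last; head->next = first;
    }
}

struct mn_node {
    struct list_head link;      /* on the global node list */
    unsigned long start, last;  /* address range this node covers */
    struct list_head bos;       /* buffer objects inside the range */
};

static struct list_head nodes;  /* global list; INIT_LIST_HEAD() at setup */

static struct mn_node *merge_range(unsigned long addr, unsigned long end)
{
    struct list_head bos;       /* accumulated buffer objects */
    struct mn_node *node = NULL;
    struct list_head *it = nodes.next;

    INIT_LIST_HEAD(&bos);
    while (it != &nodes) {
        struct mn_node *n = (struct mn_node *)it;   /* link is first member */

        it = it->next;
        if (n->last < addr || n->start > end)
            continue;           /* disjoint: leave it alone */
        free(node);             /* keep only one survivor node */
        node = n;
        list_del(&n->link);
        addr = n->start < addr ? n->start : addr;   /* min() */
        end = n->last > end ? n->last : end;        /* max() */
        list_splice(&n->bos, &bos);                 /* adopt its buffers */
    }
    if (!node && !(node = malloc(sizeof(*node))))
        return NULL;            /* -ENOMEM in the original */
    node->start = addr;
    node->last = end;
    INIT_LIST_HEAD(&node->bos);
    list_splice(&bos, &node->bos);  /* hand the union node its buffers */
    list_add(&node->link, &nodes);  /* reinsert exactly one merged node */
    return node;
}
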
2019 Oct 29
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
...= interval_tree_iter_first(&rmn->objects, addr, end))) {
> -     kfree(node);
> -     node = container_of(it, struct radeon_mn_node, it);
> -     interval_tree_remove(&node->it, &rmn->objects);
> -     addr = min(it->start, addr);
> -     end = max(it->last, end);
> -     list_splice(&node->bos, &bos);
> - }
> -
> - if (!node) {
> -     node = kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL);
> -     if (!node) {
> -         mutex_unlock(&rmn->lock);
> -         return -ENOMEM;
> -     }
> - }
> -
> - bo->mn = rmn;
> -
> - node->it.s...

2012 Jun 12
1
[PATCH v2] block: Drop dead function blk_abort_queue()
...t a request based block device, nothing to abort
-    */
-   if (!q->request_fn)
-       return;
-
-   spin_lock_irqsave(q->queue_lock, flags);
-
-   elv_abort_queue(q);
-
-   /*
-    * Splice entries to local list, to avoid deadlocking if entries
-    * get readded to the timeout list by error handling
-    */
-   list_splice_init(&q->timeout_list, &list);
-
-   list_for_each_entry_safe(rq, tmp, &list, timeout_list)
-       blk_abort_request(rq);
-
-   /*
-    * Occasionally, blk_abort_request() will return without
-    * deleting the element from the list. Make sure we add those back
-    * instead of leaving them on...

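The comment in the removed code states the technique plainly: splice the pending entries onto a local list first, because the handlers may re-add entries to the source list, and walking the source directly would deadlock or loop. list_splice_init() does the detach in O(1) and re-initializes the source head. A minimal userspace sketch, assuming pthread_mutex_t for queue_lock and abort_request/abort_queue as stand-ins rather than the block layer's API:

/* Sketch: detach the whole pending set before processing it, so the
 * per-entry handler is free to put entries back on the source list. */
#include <pthread.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *e, struct list_head *head)
{
    e->prev = head->prev; e->next = head;
    head->prev->next = e; head->prev = e;
}

/* Kernel-style list_splice_init: move all of @list onto @head and
 * leave @list empty, all in O(1). */
static void list_splice_init(struct list_head *list, struct list_head *head)
{
    if (!list_empty(list)) {
        struct list_head *first = list->next, *last = list->prev;
        first->prev = head; last->next = head->next;
        head->next->prev = last; head->next = first;
    }
    INIT_LIST_HEAD(list);
}

struct request { struct list_head timeout_list; int retries; };

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct list_head timeout_list;   /* INIT_LIST_HEAD() at setup */

/* Stand-in for blk_abort_request(): may legitimately put the request
 * straight back on timeout_list. */
static void abort_request(struct request *rq)
{
    if (rq->retries-- > 0)
        list_add_tail(&rq->timeout_list, &timeout_list);
}

static void abort_queue(void)
{
    struct list_head local;

    INIT_LIST_HEAD(&local);
    pthread_mutex_lock(&queue_lock);
    /* Detach the pending set first; iterating timeout_list directly
     * would revisit entries the handler re-adds, looping forever. */
    list_splice_init(&timeout_list, &local);
    while (!list_empty(&local)) {
        struct request *rq = (struct request *)local.next;

        list_del(&rq->timeout_list);
        abort_request(rq);      /* may re-add to timeout_list */
    }
    pthread_mutex_unlock(&queue_lock);
}
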
2016 Jan 01
0
[PATCH RFC] balloon: fix page list locking
...vm_event(BALLOON_DEFLATE);
            spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
            unlock_page(page);
+           put_page(page);
            dequeued_page = true;
            break;
        }
+       put_page(page);
+       spin_lock_irqsave(&b_dev_info->pages_lock, flags);
    }

+   /* re-add remaining entries */
+   list_splice(&processed, &b_dev_info->pages);
+   spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
    if (!dequeued_page) {
        /*
         * If we are unable to dequeue a balloon page because the page

--
MST

2019 Oct 29
0
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...= interval_tree_iter_first(&amn->objects, addr, end))) {
> -     kfree(node);
> -     node = container_of(it, struct amdgpu_mn_node, it);
> -     interval_tree_remove(&node->it, &amn->objects);
> -     addr = min(it->start, addr);
> -     end = max(it->last, end);
> -     list_splice(&node->bos, &bos);
> - }
> -
> - if (!node)
> -     node = new_node;
> + if (bo->kfd_bo)
> +     bo->notifier.ops = &amdgpu_mn_hsa_ops;
>   else
> -     kfree(new_node);
> -
> - bo->mn = amn;
> -
> - node->it.start = addr;
> - node->it.l...

2008 Jan 08
1
[PATCH] kvm guest balloon driver
...de->bpage) {
+           kfree(node);
+           goto out_free;
+       }
+
+       list_add(&node->bp_list, &tmp_list);
+       allocated++;
+       *pfn = page_to_pfn(node->bpage);
+       pfn++;
+   }
+
+   r = send_balloon_buf(CMD_BALLOON_INFLATE, buf);
+   if (r)
+       goto out_free;
+
+   spin_lock(&balloon_plist_lock);
+   list_splice(&tmp_list, &balloon_plist);
+   balloon_size += allocated;
+   totalram_pages -= allocated;
+   dprintk("%s: current balloon size=%d\n", __FUNCTION__,
+       balloon_size);
+   spin_unlock(&balloon_plist_lock);
+   return allocated;
+
+out_free:
+   list_for_each_entry_safe(node, tmp,...

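The inflate path above shows the batching idiom common to both balloon drivers in these results: build the whole page set on a private tmp_list with no lock held, and only after the host accepts the batch take the lock for one O(1) list_splice that publishes everything at once; on failure the batch was never visible and can be freed quietly. A userspace sketch under those assumptions (tell_host stands in for send_balloon_buf, and the list helpers are hand-rolled, not <linux/list.h>):

/* Sketch: assemble a batch privately, publish it with one splice. */
#include <pthread.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

static void list_add(struct list_head *e, struct list_head *head)
{
    e->next = head->next; e->prev = head;
    head->next->prev = e; head->next = e;
}

static void list_splice(struct list_head *list, struct list_head *head)
{
    if (!list_empty(list)) {
        struct list_head *first = list->next, *last = list->prev;
        first->prev = head; last->next = head->next;
        head->next->prev = last; head->next = first;
    }
}

struct balloon_page { struct list_head bp_list; };

static pthread_mutex_t balloon_plist_lock = PTHREAD_MUTEX_INITIALIZER;
static struct list_head balloon_plist;  /* INIT_LIST_HEAD() at setup */
static int balloon_size;

/* Stand-in for send_balloon_buf(); assume it can fail. */
static int tell_host(int npages) { (void)npages; return 0; }

static int inflate(int npages)
{
    struct list_head tmp_list;
    int allocated = 0;

    INIT_LIST_HEAD(&tmp_list);
    while (allocated < npages) {            /* no lock held here */
        struct balloon_page *p = malloc(sizeof(*p));

        if (!p)
            goto out_free;
        list_add(&p->bp_list, &tmp_list);
        allocated++;
    }
    if (tell_host(allocated))
        goto out_free;

    pthread_mutex_lock(&balloon_plist_lock);
    list_splice(&tmp_list, &balloon_plist); /* publish the whole batch */
    balloon_size += allocated;
    pthread_mutex_unlock(&balloon_plist_lock);
    return allocated;

out_free:
    /* the batch was never visible to anyone else: free it unlocked */
    while (!list_empty(&tmp_list)) {
        struct balloon_page *p = (struct balloon_page *)tmp_list.next;

        list_del(&p->bp_list);
        free(p);
    }
    return -1;
}
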
2008 Jan 14
6
[PATCH] KVM virtio balloon driver
...oc_page(GFP_HIGHUSER | __GFP_NORETRY);
+       if (!page)
+           goto out_free;
+       list_add(&page->lru, &tmp_list);
+       allocated++;
+       *pfn = page_to_pfn(page);
+       pfn++;
+   }
+
+   r = send_balloon_buf(v, CMD_BALLOON_INFLATE, buf);
+   if (r)
+       goto out_free;
+
+   spin_lock(&v->plist_lock);
+   list_splice(&tmp_list, &v->balloon_plist);
+   v->balloon_size += allocated;
+   totalram_pages -= allocated;
+   dprintk(&v->vdev->dev, "%s: current balloon size=%d\n", __func__,
+       v->balloon_size);
+   spin_unlock(&v->plist_lock);
+   return allocated;
+
+out_free:
+   list_...

2016 Jan 04
0
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
...&b_dev_info->pages_lock, flags);
>       unlock_page(page);
> +     put_page(page);
>       dequeued_page = true;
>       break;
>   }
> + put_page(page);
> + spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> }
>
> + /* re-add remaining entries */
> + list_splice(&processed, &b_dev_info->pages);
> + spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
> +
>   if (!dequeued_page) {
>       /*
>        * If we are unable to dequeue a balloon page because the page

--
Kind regards,
Minchan Kim

2019 Oct 28
2
[PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...amn->lock);
-
-   while ((it = interval_tree_iter_first(&amn->objects, addr, end))) {
-       kfree(node);
-       node = container_of(it, struct amdgpu_mn_node, it);
-       interval_tree_remove(&node->it, &amn->objects);
-       addr = min(it->start, addr);
-       end = max(it->last, end);
-       list_splice(&node->bos, &bos);
-   }
-
-   if (!node)
-       node = new_node;
+   if (bo->kfd_bo)
+       bo->notifier.ops = &amdgpu_mn_hsa_ops;
    else
-       kfree(new_node);
-
-   bo->mn = amn;
-
-   node->it.start = addr;
-   node->it.last = end;
-   INIT_LIST_HEAD(&node->bos);
-   list_splice(&...

2015 Dec 27
5
[PATCH 1/2] virtio_balloon: fix race by fill and leak
During compaction-related work I encountered a bug in ballooning: after repeated inflate/deflate cycles, guest memory (i.e., MemTotal in /proc/meminfo) shrinks and is never recovered. The reason is that balloon_lock doesn't cover release_pages_balloon, so struct virtio_balloon fields can be overwritten by a racing fill_balloon (e.g., the shared vb->*pfns array is the critical data here).
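From the description alone, the natural fix is to widen the balloon_lock critical section so that filling vb->pfns and releasing the pages happen atomically with respect to fill_balloon. The sketch below is speculative, not the actual patch: pthread_mutex_t stands in for the mutex and tell_host/release_pages_balloon are stubs.

/* Sketch of the race and its fix: deflate must hold balloon_lock
 * across BOTH filling vb->pfns and releasing the pages, otherwise a
 * concurrent fill_balloon() reuses the same vb->pfns scratch array
 * while the releaser is still walking it. */
#include <pthread.h>
#include <stddef.h>

#define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256

struct virtio_balloon {
    pthread_mutex_t balloon_lock;   /* pthread_mutex_init() at setup */
    unsigned int num_pfns;
    unsigned long pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX]; /* shared scratch */
};

/* Stubs standing in for the host notification and the page release. */
static void tell_host(struct virtio_balloon *vb) { (void)vb; }
static void release_pages_balloon(struct virtio_balloon *vb) { (void)vb; }

static void fill_pfn_array(struct virtio_balloon *vb, size_t num)
{
    for (vb->num_pfns = 0; vb->num_pfns < num &&
         vb->num_pfns < VIRTIO_BALLOON_ARRAY_PFNS_MAX; vb->num_pfns++)
        vb->pfns[vb->num_pfns] = 0; /* pfn of a dequeued page */
}

static void leak_balloon(struct virtio_balloon *vb, size_t num)
{
    pthread_mutex_lock(&vb->balloon_lock);
    fill_pfn_array(vb, num);
    tell_host(vb);
    /* Before the fix, the unlock sat here, so fill_balloon() could
     * overwrite vb->pfns while release_pages_balloon() still read it.
     * Extending the critical section closes that window: */
    release_pages_balloon(vb);
    pthread_mutex_unlock(&vb->balloon_lock);
}
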