Displaying 20 results from an estimated 163 matches for "kref_get".
2010 Jan 18
2
[PATCH] drm/ttm: Fix race condition in ttm_bo_delayed_delete
...LL;
+ if (list_empty(&bdev->ddestroy)) {
+ spin_unlock(&glob->lru_lock);
+ return 0;
+ }
- /*
- * Protect the next list entry from destruction while we
- * unlock the lru_lock.
- */
+ entry = list_first_entry(&bdev->ddestroy,
+ struct ttm_buffer_object, ddestroy);
+ kref_get(&entry->list_kref);
- if (next != &bdev->ddestroy) {
- nentry = list_entry(next, struct ttm_buffer_object,
- ddestroy);
+ for (;;) {
+ struct ttm_buffer_object *nentry = NULL;
+
+ if (!list_empty(&entry->ddestroy)
+ && entry->ddestroy.next != &bd...
2010 Jan 18
1
[PATCH] drm/ttm: Fix race condition in ttm_bo_delayed_delete (v2)
...LL;
+ if (list_empty(&bdev->ddestroy)) {
+ spin_unlock(&glob->lru_lock);
+ return 0;
+ }
- /*
- * Protect the next list entry from destruction while we
- * unlock the lru_lock.
- */
+ entry = list_first_entry(&bdev->ddestroy,
+ struct ttm_buffer_object, ddestroy);
+ kref_get(&entry->list_kref);
+
+ for (;;) {
+ struct ttm_buffer_object *nentry = NULL;
- if (next != &bdev->ddestroy) {
- nentry = list_entry(next, struct ttm_buffer_object,
- ddestroy);
+ if (entry->ddestroy.next != &bdev->ddestroy) {
+ nentry = list_first_entry(&...
2019 Sep 27
1
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...drivers/gpu/drm/drm_dp_mst_topology.c
> index 2fe24e366925..5b5c0b3b3c0e 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1327,7 +1327,8 @@ static void
> drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *mstb)
> {
> kref_get(&mstb->malloc_kref);
> - DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref));
> + DRM_DEBUG("mstb %p/%px (%d)\n",
> + mstb, mstb, kref_read(&mstb->malloc_kref));
> }
>
> /**
> @@ -1344,7 +1345,8 @@ drm_dp_mst_get_mstb...
2013 Sep 04
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...n =
> - VHOST_DMA_IN_PROGRESS;
> - ubuf->callback = vhost_zerocopy_callback;
> - ubuf->ctx = nvq->ubufs;
> - ubuf->desc = nvq->upend_idx;
> - msg.msg_control = ubuf;
> - msg.msg_controllen = sizeof(ubuf);
> - ubufs = nvq->ubufs;
> - kref_get(&ubufs->kref);
> - }
> + vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS;
> + ubuf->callback = vhost_zerocopy_callback;
> + ubuf->ctx = nvq->ubufs;
> + ubuf->desc = nvq->upend_idx;
> + msg.msg_control = ubuf;
> + msg.msg_controlle...
2013 Sep 04
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...n =
> - VHOST_DMA_IN_PROGRESS;
> - ubuf->callback = vhost_zerocopy_callback;
> - ubuf->ctx = nvq->ubufs;
> - ubuf->desc = nvq->upend_idx;
> - msg.msg_control = ubuf;
> - msg.msg_controllen = sizeof(ubuf);
> - ubufs = nvq->ubufs;
> - kref_get(&ubufs->kref);
> - }
> + vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS;
> + ubuf->callback = vhost_zerocopy_callback;
> + ubuf->ctx = nvq->ubufs;
> + ubuf->desc = nvq->upend_idx;
> + msg.msg_control = ubuf;
> + msg.msg_controlle...
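The two vhost_net hits above take a reference on the ubuf completion bookkeeping for every buffer handed to the device for zerocopy transmit, and the completion callback drops it again. A minimal, self-contained sketch of that pattern follows; the names (my_ubufs, my_ubuf_start, my_ubuf_done, my_ubuf_release) are illustrative, not vhost's own:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct my_ubufs {
        struct kref kref;       /* one base ref plus one per in-flight buffer */
};

static void my_ubuf_release(struct kref *kref)
{
        struct my_ubufs *ubufs = container_of(kref, struct my_ubufs, kref);

        kfree(ubufs);
}

/* Queueing a zerocopy transmit: the buffer pins the bookkeeping. */
static void my_ubuf_start(struct my_ubufs *ubufs)
{
        kref_get(&ubufs->kref);
}

/* Zerocopy completion callback: drop the per-buffer reference. */
static void my_ubuf_done(struct my_ubufs *ubufs)
{
        kref_put(&ubufs->kref, my_ubuf_release);
}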
2019 Jan 09
0
[PATCH v5 06/20] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...'t
have been fixed properly beforehand:
- CPU1 unrefs port from topology (refcount 1->0)
- CPU2 refs port in topology (refcount 0->1)
Since we now can guarantee memory safety for ports and branches
as-needed, we also can make our main reference counting functions fix
this problem by using kref_get_unless_zero() internally so that topology
refcounts can only ever reach 0 once.
Changes since v3:
* Remove rebase detritus - danvet
* Split out purely style changes into separate patches - hwentlan
Changes since v2:
* Fix commit message - checkpatch
* s/)-1/) - 1/g - checkpatch
Changes since v1:...
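For reference, the scheme described in this series boils down to making the topology "get" fail once the refcount has hit zero. A minimal sketch of that idea, not the dp_mst code itself (my_port and my_port_topology_get are hypothetical names):

#include <linux/kref.h>

struct my_port {
        struct kref topology_kref;
};

/*
 * Returns nonzero if a reference was taken, 0 if the refcount had
 * already reached zero, so a dying port can never be resurrected.
 */
static int my_port_topology_get(struct my_port *port)
{
        return kref_get_unless_zero(&port->topology_kref);
}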
2019 Sep 03
0
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...rs/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 2fe24e366925..5b5c0b3b3c0e 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -1327,7 +1327,8 @@ static void
drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *mstb)
{
kref_get(&mstb->malloc_kref);
- DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref));
+ DRM_DEBUG("mstb %p/%px (%d)\n",
+ mstb, mstb, kref_read(&mstb->malloc_kref));
}
/**
@@ -1344,7 +1345,8 @@ drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *m...
2018 Dec 14
2
[WIP PATCH 03/15] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...rehand:
>
> - CPU1 unrefs port from topology (refcount 1->0)
> - CPU2 refs port in topology(refcount 0->1)
>
> Since we now can guarantee memory safety for ports and branches
> as-needed, we also can make our main reference counting functions fix
> this problem by using kref_get_unless_zero() internally so that topology
> refcounts can only ever reach 0 once.
>
> Signed-off-by: Lyude Paul <lyude at redhat.com>
> Cc: Daniel Vetter <daniel at ffwll.ch>
> Cc: David Airlie <airlied at redhat.com>
> Cc: Jerry Zuo <Jerry.Zuo at amd.com>...
2019 Jan 05
0
[PATCH v4 02/16] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...'t
have been fixed properly beforehand:
- CPU1 unrefs port from topology (refcount 1->0)
- CPU2 refs port in topology (refcount 0->1)
Since we now can guarantee memory safety for ports and branches
as-needed, we also can make our main reference counting functions fix
this problem by using kref_get_unless_zero() internally so that topology
refcounts can only ever reach 0 once.
Changes since v2:
* Fix commit message - checkpatch
Changes since v1:
* Remove forward declarations - danvet
* Move "Branch device and port refcounting" section from documentation
into kernel-doc comments -...
2010 Apr 05
2
Kernel BUG
2018 Dec 19
1
[WIP PATCH 03/15] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...y (refcount 1->0)
> > > - CPU2 refs port in topology(refcount 0->1)
> > >
> > > Since we now can guarantee memory safety for ports and branches
> > > as-needed, we also can make our main reference counting functions fix
> > > this problem by using kref_get_unless_zero() internally so that topology
> > > refcounts can only ever reach 0 once.
> > >
> > > Signed-off-by: Lyude Paul <lyude at redhat.com>
> > > Cc: Daniel Vetter <daniel at ffwll.ch>
> > > Cc: David Airlie <airlied at redhat.com>...
2018 Dec 18
0
[WIP PATCH 03/15] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...unrefs port from topology (refcount 1->0)
> > - CPU2 refs port in topology(refcount 0->1)
> >
> > Since we now can guarantee memory safety for ports and branches
> > as-needed, we also can make our main reference counting functions fix
> > this problem by using kref_get_unless_zero() internally so that topology
> > refcounts can only ever reach 0 once.
> >
> > Signed-off-by: Lyude Paul <lyude at redhat.com>
> > Cc: Daniel Vetter <daniel at ffwll.ch>
> > Cc: David Airlie <airlied at redhat.com>
> > Cc: Jerry Zu...
2010 Jan 20
0
[PATCH] drm/ttm: Fix race condition in ttm_bo_delayed_delete (v3, final)
...uffer_object, ddestroy);
- nentry = NULL;
+ if (list_empty(&bdev->ddestroy))
+ goto out_unlock;
- /*
- * Protect the next list entry from destruction while we
- * unlock the lru_lock.
- */
+ entry = list_first_entry(&bdev->ddestroy,
+ struct ttm_buffer_object, ddestroy);
+ kref_get(&entry->list_kref);
+
+ for (;;) {
+ struct ttm_buffer_object *nentry = NULL;
- if (next != &bdev->ddestroy) {
- nentry = list_entry(next, struct ttm_buffer_object,
- ddestroy);
+ if (entry->ddestroy.next != &bdev->ddestroy) {
+ nentry = list_first_entry(&...
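All three revisions of the ttm_bo_delayed_delete fix shown in this listing rely on the same pattern: take a kref on the list entry before dropping the lru_lock so the entry cannot be freed while the lock is released, then drop the reference once the work is done and retake the lock. A stripped-down sketch of that general shape, with hypothetical types and helpers (my_obj, my_obj_release, work_on), not the exact TTM code:

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct my_obj {
        struct kref list_kref;
        struct list_head ddestroy;
};

static void walk_ddestroy(spinlock_t *lock, struct list_head *head,
                          void (*work_on)(struct my_obj *),
                          void (*my_obj_release)(struct kref *))
{
        struct my_obj *entry;

        spin_lock(lock);
        while (!list_empty(head)) {
                entry = list_first_entry(head, struct my_obj, ddestroy);
                kref_get(&entry->list_kref);    /* entry survives the unlock */
                spin_unlock(lock);

                work_on(entry);         /* may sleep; expected to unlink the entry */
                kref_put(&entry->list_kref, my_obj_release);

                spin_lock(lock);
        }
        spin_unlock(lock);
}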
2018 Dec 14
0
[WIP PATCH 03/15] drm/dp_mst: Introduce new refcounting scheme for mstbs and ports
...'t
have been fixed properly beforehand:
- CPU1 unrefs port from topology (refcount 1->0)
- CPU2 refs port in topology (refcount 0->1)
Since we now can guarantee memory safety for ports and branches
as-needed, we also can make our main reference counting functions fix
this problem by using kref_get_unless_zero() internally so that topology
refcounts can only ever reach 0 once.
Signed-off-by: Lyude Paul <lyude at redhat.com>
Cc: Daniel Vetter <daniel at ffwll.ch>
Cc: David Airlie <airlied at redhat.com>
Cc: Jerry Zuo <Jerry.Zuo at amd.com>
Cc: Harry Wentland <harry....
2014 Sep 18
2
[PATCH v2 3/6] hw_random: use reference counts on each struct hwrng.
...eanup_rng(struct kref *kref)
> +{
> + struct hwrng *rng = container_of(kref, struct hwrng, ref);
> +
> + if (rng->cleanup)
> + rng->cleanup(rng);
> +}
> +
> +static void set_current_rng(struct hwrng *rng)
> +{
> + BUG_ON(!mutex_is_locked(&rng_mutex));
> + kref_get(&rng->ref);
> + current_rng = rng;
> +}
> +
> +static void drop_current_rng(void)
> +{
> + BUG_ON(!mutex_is_locked(&rng_mutex));
> + if (!current_rng)
> + return;
> +
> + kref_put(&current_rng->ref, cleanup_rng);
> + current_rng = NULL;
> +}
&g...
2014 Sep 18
2
[PATCH v2 3/6] hw_random: use reference counts on each struct hwrng.
...eanup_rng(struct kref *kref)
> +{
> + struct hwrng *rng = container_of(kref, struct hwrng, ref);
> +
> + if (rng->cleanup)
> + rng->cleanup(rng);
> +}
> +
> +static void set_current_rng(struct hwrng *rng)
> +{
> + BUG_ON(!mutex_is_locked(&rng_mutex));
> + kref_get(&rng->ref);
> + current_rng = rng;
> +}
> +
> +static void drop_current_rng(void)
> +{
> + BUG_ON(!mutex_is_locked(&rng_mutex));
> + if (!current_rng)
> + return;
> +
> + kref_put(&current_rng->ref, cleanup_rng);
> + current_rng = NULL;
> +}
&g...
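The hunk quoted in these two hw_random hits only shows the get/put helpers for the current device. For completeness, here is a guess at the surrounding lifecycle using purely illustrative names (my_rng, my_rng_register, my_rng_use, my_rng_cleanup), not the actual hw_random core: the kref starts at 1 at registration, each user holds its own reference while reading, and the cleanup hook runs only when the last reference drops.

#include <linux/kref.h>
#include <linux/slab.h>

struct my_rng {
        struct kref ref;
};

static void my_rng_cleanup(struct kref *kref)
{
        kfree(container_of(kref, struct my_rng, ref));
}

static struct my_rng *my_rng_register(void)
{
        struct my_rng *rng = kzalloc(sizeof(*rng), GFP_KERNEL);

        if (rng)
                kref_init(&rng->ref);   /* refcount == 1 */
        return rng;
}

static void my_rng_use(struct my_rng *rng)
{
        kref_get(&rng->ref);            /* pin across use */
        /* ... read entropy from the device ... */
        kref_put(&rng->ref, my_rng_cleanup);
}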
2008 Aug 09
4
Upgrade 3.0.3 to 3.2.1
Hi,
I'm preparing to upgrade my servers from Xen 3.0.3 32-bit to 3.2.1 64-bit.
The old system:
Debian 4.0 i386 with included hypervisor 3.0.3 (pae) and dom0 kernel.
The new system:
Debian lenny amd64 with the included hypervisor 3.2.1 and dom0 kernel from
Debian 4.0 amd64.
My domUs have a self-compiled kernel built from the dom0 kernel of the old system
(mainly the dom0 kernel but
2013 Jun 05
4
[PATCH] vhost_net: clear msg.control for non-zerocopy case during tx
...m>
---
drivers/vhost/net.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 2b51e23..b07d96b 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -436,7 +436,8 @@ static void handle_tx(struct vhost_net *net)
kref_get(&ubufs->kref);
}
nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;
- }
+ } else
+ msg.msg_control = NULL;
/* TODO: Check specific error and bomb out unless ENOBUFS? */
err = sock->ops->sendmsg(NULL, sock, &msg, len);
if (unlikely(err < 0)) {
--
1.7....
2013 Jun 05
4
[PATCH] vhost_net: clear msg.control for non-zerocopy case during tx
...m>
---
drivers/vhost/net.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 2b51e23..b07d96b 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -436,7 +436,8 @@ static void handle_tx(struct vhost_net *net)
kref_get(&ubufs->kref);
}
nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;
- }
+ } else
+ msg.msg_control = NULL;
/* TODO: Check specific error and bomb out unless ENOBUFS? */
err = sock->ops->sendmsg(NULL, sock, &msg, len);
if (unlikely(err < 0)) {
--
1.7....
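The one-line fix above guards against a stale zerocopy cookie being reused: msg is shared across loop iterations, so the non-zerocopy path must clear msg_control explicitly. A tiny illustrative helper showing that intent (prep_tx_msg is a hypothetical name, not vhost's):

#include <linux/socket.h>
#include <linux/types.h>

/*
 * Reset msg_control on every non-zerocopy pass so a completion cookie
 * from an earlier zerocopy send is never handed to sendmsg() again.
 */
static void prep_tx_msg(struct msghdr *msg, bool zerocopy, void *ubuf)
{
        if (zerocopy) {
                msg->msg_control = ubuf;
                msg->msg_controllen = sizeof(ubuf);
        } else {
                msg->msg_control = NULL;
                msg->msg_controllen = 0;
        }
}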
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and