search for: release_lock

Displaying 20 results from an estimated 36 matches for "release_lock".

2015 Jun 06
0
[PATCH 2/5] threads: Acquire and release the lock around each public guestfs_* API.
Since each ACQUIRE_LOCK/RELEASE_LOCK call must balance, this code is difficult to debug. Enable DEBUG_LOCK to add some prints which can help. The only definitive list of public APIs is found indirectly in the generator (in generator/c.ml : globals). --- generator/c.ml | 18 ++++++++++++++ src/errors.c | 66 +++++++...
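The commit message names the pattern but the macros themselves are not shown in this excerpt. As a rough sketch, assuming a pthread mutex inside the handle (the handle layout and the TRACE_LOCK helper are illustrative, not libguestfs's actual code), balanced lock macros with optional DEBUG_LOCK prints look like this:

/* Balanced lock/unlock macros with optional tracing via DEBUG_LOCK.
 * The handle struct and macro bodies are assumptions for illustration;
 * the real definitions live inside libguestfs. */
#include <pthread.h>
#include <stdio.h>

struct handle { pthread_mutex_t lock; };

#ifdef DEBUG_LOCK
#define TRACE_LOCK(op) fprintf (stderr, "%s:%d: %s lock\n", __FILE__, __LINE__, op)
#else
#define TRACE_LOCK(op) /* no debug output */
#endif

#define ACQUIRE_LOCK(h)                \
  do {                                 \
    TRACE_LOCK ("acquire");            \
    pthread_mutex_lock (&(h)->lock);   \
  } while (0)

#define RELEASE_LOCK(h)                \
  do {                                 \
    TRACE_LOCK ("release");            \
    pthread_mutex_unlock (&(h)->lock); \
  } while (0)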
2015 Jun 06
7
[PATCH 0/5] Add support for thread-safe handle.
This patch isn't ready to go upstream. In fact, I think we might do a quick 1.30 release soon, and save this patch, and also the extensive changes proposed for the test suite[1], until after 1.30. Currently it is not safe to use the same handle from multiple threads, unless you implement your own mutexes. See: http://libguestfs.org/guestfs.3.html#multiple-handles-and-multiple-threads These
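Until the handle itself is thread-safe, the workaround the linked documentation describes is a caller-side mutex around every guestfs_* call on a shared handle. A minimal sketch (guestfs_launch stands in for any API call; error handling elided):

/* Serialize all uses of one shared guestfs handle behind our own mutex. */
#include <guestfs.h>
#include <pthread.h>

static pthread_mutex_t handle_lock = PTHREAD_MUTEX_INITIALIZER;

static int
locked_launch (guestfs_h *g)
{
  int r;

  pthread_mutex_lock (&handle_lock);
  r = guestfs_launch (g);       /* any guestfs_* call goes here */
  pthread_mutex_unlock (&handle_lock);
  return r;
}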
2015 Jun 11
1
Re: [PATCH 2/5] threads: Acquire and release the lock around each public guestfs_* API.
Hi, On Saturday 06 June 2015 14:20:38 Richard W.M. Jones wrote: > Since each ACQUIRE_LOCK/RELEASE_LOCK call must balance, this code is > difficult to debug. Enable DEBUG_LOCK to add some prints which can > help. There's some way this could be simplified: > const char * > guestfs_last_error (guestfs_h *g) > { > - return g->last_error; > + const char *r; > + >...
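The quoted diff is cut off by the excerpt; the shape of the suggested simplification is presumably a locked read into a local before returning, roughly (a reconstruction, not the committed code):

const char *
guestfs_last_error (guestfs_h *g)
{
  const char *r;

  ACQUIRE_LOCK (g);
  r = g->last_error;
  RELEASE_LOCK (g);
  return r;
}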
2014 Jul 09
0
[PATCH 10/17] drm/qxl: rework to new fence interface
...6674..0d144e0646d6 100644 --- a/drivers/gpu/drm/qxl/qxl_debugfs.c +++ b/drivers/gpu/drm/qxl/qxl_debugfs.c @@ -57,11 +57,21 @@ qxl_debugfs_buffers_info(struct seq_file *m, void *data) struct qxl_device *qdev = node->minor->dev->dev_private; struct qxl_bo *bo; + spin_lock(&qdev->release_lock); list_for_each_entry(bo, &qdev->gem.objects, list) { + struct reservation_object_list *fobj; + int rel; + + rcu_read_lock(); + fobj = rcu_dereference(bo->tbo.resv->fence); + rel = fobj ? fobj->shared_count : 0; + rcu_read_unlock(); + seq_printf(m, "size %ld, pc %d,...
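The search excerpt squashes the hunk onto one line; unfolded, the new debugfs path takes the per-device release_lock around the BO list walk and samples the shared fence count inside an RCU read-side critical section (reconstructed from the fragments in these hits, so treat it as a sketch):

spin_lock(&qdev->release_lock);
list_for_each_entry(bo, &qdev->gem.objects, list) {
        struct reservation_object_list *fobj;
        int rel;

        /* Sample the shared fence count without taking the
         * reservation lock; RCU keeps the fence list valid. */
        rcu_read_lock();
        fobj = rcu_dereference(bo->tbo.resv->fence);
        rel = fobj ? fobj->shared_count : 0;
        rcu_read_unlock();

        seq_printf(m, "size %ld, pc %d, num releases %d\n",
                   (unsigned long)bo->gem_base.size,
                   bo->pin_count, rel);
}
spin_unlock(&qdev->release_lock);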
2010 Jul 08
0
Bug#588406: xen-utils-common: /etc/xen/scripts/block not driving helper scripts; XEN_SCRIPT_DIR not properly set
...uest' ] then dom='a guest ' when='now' else dom='the privileged ' when='by a guest' fi if [ "$mode" = 'w' ] then m1='' m2='' else m1='read-write ' m2='read-only ' fi release_lock "block" ebusy \ "${prefix}${m1}in ${dom}domain, and so cannot be mounted ${m2}${when}." } t=$(xenstore_read_default "$XENBUS_PATH/type" 'MISSING') case "$command" in add) phys=$(xenstore_read_default "$XENBUS_PATH/physical-device" ...
2012 Sep 21
1
PATCH [base vtpm and libxl patches 3/6] Fix bugs in vtpm hotplug scripts
...db_add_instance $uuid $instance + else + vtpmdb_add_instance $domname $instance fi else if [ "$reason" == "resume" ]; then @@ -290,7 +288,6 @@ function vtpm_create_instance () { vtpm_start $instance fi fi - release_lock vtpmdb xenstore_write $XENBUS_PATH/instance $instance @@ -322,8 +319,8 @@ function vtpm_remove_instance () { if [ "$instance" != "0" ]; then vtpm_suspend $instance fi - release_lock vtpmdb + } diff --git a/tools/hotplug/Linux/vtpm-delete b/too...
2014 May 14
0
[RFC PATCH v1 12/16] drm/ttm: flip the switch, and convert to dma_fence
...bj %p, num releases %d\n", - (unsigned long)bo->gem_base.size, bo->pin_count, - bo->tbo.sync_obj, rel); + seq_printf(m, "size %ld, pc %d, num releases %d\n", + (unsigned long)bo->gem_base.size, + bo->pin_count, rel); } spin_unlock(&qdev->release_lock); return 0; diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index d547cbdebeb4..74e2117ee0e6 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -280,9 +280,7 @@ struct qxl_device { uint8_t slot_gen_bits; uint64_t va_slot_mask; - /*...
2014 Jul 09
0
[PATCH 13/17] drm/ttm: flip the switch, and convert to dma_fence
...bj %p, num releases %d\n", - (unsigned long)bo->gem_base.size, bo->pin_count, - bo->tbo.sync_obj, rel); + seq_printf(m, "size %ld, pc %d, num releases %d\n", + (unsigned long)bo->gem_base.size, + bo->pin_count, rel); } spin_unlock(&qdev->release_lock); return 0; diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index d547cbdebeb4..74e2117ee0e6 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -280,9 +280,7 @@ struct qxl_device { uint8_t slot_gen_bits; uint64_t va_slot_mask; - /*...
2012 Oct 13
4
[PATCH] hotplug/Linux: close lockfd after lock attempt
# HG changeset patch # User Olaf Hering <olaf@aepfle.de> # Date 1350143934 -7200 # Node ID 5aa14d5afe6b1f35b23029ae90b7edb20367bbeb # Parent e0e1350dfe9b7a6cacb1378f75d8e6536d22eb2d hotplug/Linux: close lockfd after lock attempt When a HVM guest is shut down, some of the 'remove' events cannot claim the lock for some reason. Instead they try to grab the lock in a busy
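The patch itself touches Xen's shell hotplug scripts, but the underlying rule is generic: a descriptor used for a lock attempt must be closed once the attempt is over, or retries leak fds and can keep the lock pinned. The same pattern in C with flock(2) (a hypothetical helper, not code from the patch):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Try to take an exclusive lock on path.  Returns the lock fd on
 * success, -1 on failure.  The caller releases by closing the fd. */
static int
try_lock (const char *path)
{
  int fd = open (path, O_CREAT | O_RDWR, 0600);

  if (fd < 0)
    return -1;
  if (flock (fd, LOCK_EX | LOCK_NB) == -1) {
    close (fd);       /* the point of the patch: close after a failed attempt */
    return -1;
  }
  return fd;
}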
2019 Oct 29
0
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...ble(&mm->mmap_sem)) { > - mutex_unlock(&adev->mn_lock); > - return ERR_PTR(-EINTR); > - } > - > - hash_for_each_possible(adev->mn_hash, amn, node, key) > - if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm, > - amn->type) == key) > - goto release_locks; > - > - amn = kzalloc(sizeof(*amn), GFP_KERNEL); > - if (!amn) { > - amn = ERR_PTR(-ENOMEM); > - goto release_locks; > - } > - > - amn->adev = adev; > - init_rwsem(&amn->lock); > - amn->type = type; > - > - amn->mirror.ops = &amdgpu_hmm_mi...
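The removed code (repeated in the two hits below) follows a common shape: look up an object in a table under a lock, allocate it on a miss, and unwind every path through a single release_locks label. A self-contained userspace analogue of that shape, with hypothetical names:

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

struct obj { int key; struct obj *next; };

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct obj *table;       /* toy stand-in for the hash table */

static struct obj *
get_or_create (int key, int *err)
{
  struct obj *o;

  pthread_mutex_lock (&table_lock);

  for (o = table; o; o = o->next)       /* lookup first ... */
    if (o->key == key)
      goto release_locks;

  o = calloc (1, sizeof *o);            /* ... allocate on a miss */
  if (o == NULL) {
    *err = -ENOMEM;
    goto release_locks;
  }
  o->key = key;
  o->next = table;
  table = o;

release_locks:                          /* single unwind point */
  pthread_mutex_unlock (&table_lock);
  return o;
}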
2019 Oct 28
1
[PATCH v2 14/15] drm/amdgpu: Use mmu_range_notifier instead of hmm_mirror
...dev->mn_lock); - if (down_write_killable(&mm->mmap_sem)) { - mutex_unlock(&adev->mn_lock); - return ERR_PTR(-EINTR); - } - - hash_for_each_possible(adev->mn_hash, amn, node, key) - if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm, - amn->type) == key) - goto release_locks; - - amn = kzalloc(sizeof(*amn), GFP_KERNEL); - if (!amn) { - amn = ERR_PTR(-ENOMEM); - goto release_locks; - } - - amn->adev = adev; - init_rwsem(&amn->lock); - amn->type = type; - - amn->mirror.ops = &amdgpu_hmm_mirror_ops[type]; - r = hmm_mirror_register(&amn->mirro...
2019 Nov 12
0
[PATCH v3 12/14] drm/amdgpu: Use mmu_interval_notifier instead of hmm_mirror
...dev->mn_lock); - if (down_write_killable(&mm->mmap_sem)) { - mutex_unlock(&adev->mn_lock); - return ERR_PTR(-EINTR); - } - - hash_for_each_possible(adev->mn_hash, amn, node, key) - if (AMDGPU_MN_KEY(amn->mirror.hmm->mmu_notifier.mm, - amn->type) == key) - goto release_locks; - - amn = kzalloc(sizeof(*amn), GFP_KERNEL); - if (!amn) { - amn = ERR_PTR(-ENOMEM); - goto release_locks; - } - - amn->adev = adev; - init_rwsem(&amn->lock); - amn->type = type; - - amn->mirror.ops = &amdgpu_hmm_mirror_ops[type]; - r = hmm_mirror_register(&amn->mirro...
2014 May 14
17
[RFC PATCH v1 00/16] Convert all ttm drivers to use the new reservation interface
This series depends on the previously posted reservation API patches. Two of them are not yet in the for-next-fences branch of git://git.linaro.org/people/sumit.semwal/linux-3.x.git. The missing patches are still in my vmwgfx_wip branch at git://people.freedesktop.org/~mlankhorst/linux. All ttm drivers are converted to the fence API; fence_lock is removed and RCU is used in its place. qxl is the first
2014 Jul 31
19
[PATCH 01/19] fence: add debugging lines to fence_is_signaled for the callback
The fence_is_signaled callback should support being run in atomic context, but not in irq context. Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com> --- include/linux/fence.h | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/include/linux/fence.h b/include/linux/fence.h index d174585b874b..c1a4519ba2f5 100644 ---
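The excerpt stops before the hunk, so the actual debugging lines are not visible here. One plausible shape for such a check inside the 2014-era fence_is_signaled() helper; the WARN line is an assumption about the patch, not a quote of it:

static inline bool
fence_is_signaled(struct fence *fence)
{
	/* Callbacks may be invoked from atomic context, but never from
	 * hard-irq context; complain loudly if that rule is broken.
	 * (This check is an illustrative guess at the patch.) */
	WARN_ON_ONCE(in_irq());

	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		fence_signal(fence);
		return true;
	}

	return false;
}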
2014 Jul 09
22
[PATCH 00/17] Convert TTM to the new fence interface.
This series applies on top of the driver-core-next branch of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git. Before converting ttm to the new fence interface I had to fix some drivers to require a reservation before poking with fence_obj. After flipping the switch RCU becomes available instead, and the extra reservations can be dropped again. :-) I've done at least basic
2015 Jun 06
0
[PATCH 3/5] threads: Use thread-local storage for errors.
...*g, guestfs_error_handler_cb cb, void *data) { + struct error_data *error_data; + ACQUIRE_LOCK (g); - g->error_cb = cb; - g->error_cb_data = data; + error_data = get_error_data (g); + error_data->error_cb = cb; + error_data->error_cb_data = data; RELEASE_LOCK (g); } static guestfs_error_handler_cb unlocked_get_error_handler (guestfs_h *g, void **data_rtn) { - if (data_rtn) *data_rtn = g->error_cb_data; - return g->error_cb; + struct error_data *error_data = get_error_data (g); + + if (data_rtn) *data_rtn = error_data->error_cb_data; +...
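The diff stores error state per thread instead of per handle, fetched through get_error_data(). A minimal sketch of that mechanism using POSIX thread-specific data (the struct fields and the handle-less signature are simplifications; the real helper takes a guestfs_h *):

#include <pthread.h>
#include <stdlib.h>

struct error_data {
  void *error_cb;               /* stand-ins for the real callback fields */
  void *error_cb_data;
};

static pthread_key_t error_key;
static pthread_once_t error_once = PTHREAD_ONCE_INIT;

static void
make_key (void)
{
  pthread_key_create (&error_key, free);  /* free per-thread data at exit */
}

/* Return this thread's error_data, creating it on first use. */
static struct error_data *
get_error_data (void)
{
  struct error_data *d;

  pthread_once (&error_once, make_key);
  d = pthread_getspecific (error_key);
  if (d == NULL) {
    d = calloc (1, sizeof *d);
    pthread_setspecific (error_key, d);
  }
  return d;
}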
2018 Dec 12
0
[PATCH v2 05/18] drm/qxl: drop unused fields from struct qxl_device
...5d4fff1..3ebe66abf2 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -232,9 +232,6 @@ struct qxl_device { struct qxl_memslot main_slot; struct qxl_memslot surfaces_slot; - uint8_t slot_id_bits; - uint8_t slot_gen_bits; - uint64_t va_slot_mask; spinlock_t release_lock; struct idr release_idr; diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c index a9288100ae..3c1753667d 100644 --- a/drivers/gpu/drm/qxl/qxl_kms.c +++ b/drivers/gpu/drm/qxl/qxl_kms.c @@ -78,9 +78,9 @@ static void setup_slot(struct qxl_device *qdev, slot->generation...
2006 Jun 07
0
dhcp problem with vif-nat
...06-06 18:12:11.000000000 +0100 +++ vif-nat 2006-06-07 11:53:15.000000000 +0100 @@ -110,7 +110,7 @@ echo >>"$dhcpd_conf_file" \ "host $hostname { hardware ethernet $mac; fixed-address $vif_ip; option routers $router_ip; option host-name \"$hostname\"; }" release_lock "vif-nat-dhcp" - "$dhcpd_init_file" restart || true + "$dhcpd_init_file" restart && true Any ideas to debug this? Anup
2009 Aug 23
0
Bug#503044: xen-utils-common: should make the loopback device default to supporting more nodes
...ev/loop%.f' 0 1048575 | \ + grep -Fxv -m1 -f <(echo /dev/loop* | tr ' ' '\n')) && \ + mknod "$loopdev" b 7 "${loopdev#/dev/loop}" + fi + if [ "$loopdev" = '' ] then release_lock "block" Anders
2014 Aug 20
1
Dispatching calls question
I have a question about dispatching calls. Say I want to dispatch a call on line 1 using the AMI: I check my table to see whether line 1 is available, and it is. So I have done my checking and I dispatch my call, but at that same moment a call comes in on line 1 and it is no longer available, so I connect on AMI and my call fails. How do I prevent this from happening?
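This is a classic check-then-act race: the line is free when you check it and taken by the time you act. The usual fix is to make the check and the claim one atomic step, performing both under the same lock (or funneling all dispatching through a single serialized dispatcher). A sketch of the idea in C, with hypothetical names:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t line_lock = PTHREAD_MUTEX_INITIALIZER;
static bool line1_busy;

/* Atomically check that line 1 is free and mark it ours.  Only
 * dispatch the call if this returns true; release the claim when
 * the call ends (or fails). */
static bool
try_claim_line1 (void)
{
  bool claimed = false;

  pthread_mutex_lock (&line_lock);
  if (!line1_busy) {            /* check ... */
    line1_busy = true;          /* ... and act under the same lock */
    claimed = true;
  }
  pthread_mutex_unlock (&line_lock);
  return claimed;
}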