search for: spin_lock

Displaying 20 results from an estimated 1172 matches for "spin_lock".

2014 May 14
0
[RFC PATCH v1 06/16] drm/ttm: kill fence_lock
...b/drivers/gpu/drm/nouveau/nouveau_bo.c index 33eb7164525a..e98af2e9a1cb 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1196,9 +1196,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr, } /* Fallback to software copy. */ - spin_lock(&bo->bdev->fence_lock); ret = ttm_bo_wait(bo, true, intr, no_wait_gpu); - spin_unlock(&bo->bdev->fence_lock); if (ret == 0) ret = ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem); @@ -1425,26 +1423,19 @@ nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm) ttm_pool_unpopu...
2010 Aug 04
6
[PATCH -v2 0/3] jbd2 scalability patches
This version fixes three bugs in the 2nd patch of this series that caused a kernel BUG when the system was under stress. We weren't accounting for t_outstanding_credits correctly, and there were race conditions because I had overlooked that __jbd2_log_wait_for_space() and jbd2_get_transaction() require j_state_lock to be write-locked. Theodore Ts'o (3): jbd2: Use
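For orientation only, a minimal sketch of the locking rule the fix restores, assuming j_state_lock is an rwlock_t as in the patched kernels; the real jbd2 call sites are more involved (some helpers drop and retake the lock internally), and the function name below is invented:

    /* kernel context: include/linux/jbd2.h provides journal_t and j_state_lock */
    static void state_update_sketch(journal_t *journal)
    {
            write_lock(&journal->j_state_lock);
            /* ... call helpers such as __jbd2_log_wait_for_space() or
             *     jbd2_get_transaction() that expect the write lock to be
             *     held on entry ... */
            write_unlock(&journal->j_state_lock);
    }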
2023 Mar 02
1
[PATCH v2 7/8] vdpa_sim: replace the spinlock with a mutex to protect the state
...m.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 6feb29726c2a..a28103a67ae7 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -166,7 +166,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, if (IS_ERR(vdpasim->worker)) goto err_iommu; - spin_lock_init(&vdpasim->lock); + mutex_init(&vdpasim->mutex); spin_lock_init(&vdpasim->iommu_lock); dev = &vdpasim->vdpa.dev; @@ -275,13 +275,13 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready) struct vdpasim_virtqueue *vq = &vdpasim-...
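A trimmed sketch of the conversion the hunk shows, using the field names from the excerpt; the struct and init function below are illustrative stand-ins, and the stated motivation (the protected section may need to sleep) is an assumption, not spelled out in the truncated excerpt:

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    struct vdpasim_sketch {
            struct mutex mutex;       /* protects device state (was: spinlock_t lock) */
            spinlock_t iommu_lock;    /* the iommu table keeps its spinlock */
    };

    static void vdpasim_sketch_init(struct vdpasim_sketch *v)
    {
            mutex_init(&v->mutex);            /* was: spin_lock_init(&v->lock) */
            spin_lock_init(&v->iommu_lock);
    }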
2011 Dec 02
3
[PATCH] Btrfs: protect orphan block rsv with spin_lock
We've been seeing warnings coming out of the orphan commit stuff forever from ceph. Turns out it's because we're racing with checking if the orphan block reserve is set, because we clear it outside of the spin_lock. So leave the normal fastpath checks where they are, but take the spin_lock and _recheck_ to make sure we haven't had an orphan block rsv added in the meantime. Then clear the root's orphan block rsv and release the lock. With this patch a user said the warnings went away and t...
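The fix is the usual check / lock / re-check pattern. A generic sketch, not the actual btrfs code (struct, field, and function names are illustrative):

    #include <linux/spinlock.h>

    struct root_sketch {                      /* hypothetical */
            spinlock_t orphan_lock;
            void *orphan_block_rsv;
    };

    static void clear_orphan_rsv_sketch(struct root_sketch *root)
    {
            if (!root->orphan_block_rsv)      /* cheap lockless fast path */
                    return;

            spin_lock(&root->orphan_lock);
            if (root->orphan_block_rsv) {     /* re-check under the lock */
                    /* ... release the reservation ... */
                    root->orphan_block_rsv = NULL;
            }
            spin_unlock(&root->orphan_lock);
    }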
2020 Sep 09
0
[PATCH] vhost_vdpa: remove unnecessary spin_lock in vhost_vring_call
On 2020/9/9 2:52 PM, Zhu Lingshan wrote: > This commit removed unnecessary spin_locks in vhost_vring_call > and related operations. Because we manipulate irq offloading > contents in vhost_vdpa ioctl code path which is already > protected by dev mutex and vq mutex. > > Signed-off-by: Zhu Lingshan <lingshan.zhu at intel.com> Acked-by: Jason Wang <jasowang a...
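The argument is the standard one: a spinlock is redundant when every path touching the data already runs under an outer mutex. A generic sketch (struct and names are hypothetical, not the vhost_vdpa code):

    #include <linux/mutex.h>
    #include <linux/lockdep.h>

    struct dev_sketch {                       /* hypothetical */
            struct mutex mutex;               /* taken by the ioctl path */
            void *irq_ctx;
    };

    static void set_irq_ctx_sketch(struct dev_sketch *dev, void *ctx)
    {
            lockdep_assert_held(&dev->mutex); /* document the existing protection */
            dev->irq_ctx = ctx;               /* no extra spin_lock()/unlock() pair */
    }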
2010 Jun 19
3
[PATCH 1/1] ocfs2 fix o2dlm dlm run purgelist
...d, 26 insertions(+), 29 deletions(-) diff --git a/fs/ocfs2/dlm/dlmthread.c b/fs/ocfs2/dlm/dlmthread.c index 11a6d1f..79d1ef6 100644 --- a/fs/ocfs2/dlm/dlmthread.c +++ b/fs/ocfs2/dlm/dlmthread.c @@ -158,15 +158,6 @@ static int dlm_purge_lockres(struct dlm_ctxt *dlm, int master; int ret = 0; - spin_lock(&res->spinlock); - if (!__dlm_lockres_unused(res)) { - mlog(0, "%s:%.*s: tried to purge but not unused\n", - dlm->name, res->lockname.len, res->lockname.name); - __dlm_print_one_lock_resource(res); - spin_unlock(&res->spinlock); - BUG(); - } - if (res-&g...
2006 Apr 28
2
kernel panic - spin_lock
Guys, one of our boxes just died with the following error: kernel panic - not syncing: fs/block_dev.c:396: spin_lock (fs/block_dev.c:c0361c0) already locked by fs/block_dev.c/287. The system's an LVS running CentOS 4.3: centos-release-4-3.2 kernel-2.6.9-34.EL ipvsadm-1.24-6 heartbeat-1.2.3.cvs.20050927-1.centos4 I note that there's a bug report filed related to CentOS 4.2: http://bugs.centos.org/view....
2009 May 03
1
Deadlock in dlmmaster.c
...ock in fs/ocfs2/dlm/dlmmaster.c - version 2.6.28 (probably this code is in newer versions too). Could someone confirm this? Thank you. fs/ocfs2/dlm/dlmmaster.c ================== function dlm_master_request_handler: (res->spinlock <- dlm->master_lock) ----------------------------------- spin_lock(&res->spinlock); at line 1427 spin_lock(&dlm->master_lock); at line 1475 function dlm_migrate_request_handler: (dlm->master_lock <- res->spinlock) ------------------------------------------------------- spin_lock(&dlm->master_lock) at line 3036 spin_lock(&res->...
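What is being reported is a classic AB-BA ordering inversion; a generic two-path sketch of the pattern (the lock names stand in for res->spinlock and dlm->master_lock):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(res_lock);         /* stands in for res->spinlock    */
    static DEFINE_SPINLOCK(master_lock);      /* stands in for dlm->master_lock */

    static void master_request_sketch(void)   /* cf. dlm_master_request_handler */
    {
            spin_lock(&res_lock);             /* A ... */
            spin_lock(&master_lock);          /* ... then B */
            spin_unlock(&master_lock);
            spin_unlock(&res_lock);
    }

    static void migrate_request_sketch(void)  /* cf. dlm_migrate_request_handler */
    {
            spin_lock(&master_lock);          /* B ... */
            spin_lock(&res_lock);             /* ... then A: AB-BA deadlock window */
            spin_unlock(&res_lock);
            spin_unlock(&master_lock);
    }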
2012 May 25
0
[PATCH 3/3] gnttab: cleanup
...= -1) ) + lgt = ld->grant_table; + if ( unlikely((handle = get_maptrack_handle(lgt)) == -1) ) { rcu_unlock_domain(rd); gdprintk(XENLOG_INFO, "Failed to obtain maptrack handle.\n"); @@ -533,26 +534,27 @@ __gnttab_map_grant_ref( return; } - spin_lock(&rd->grant_table->lock); + rgt = rd->grant_table; + spin_lock(&rgt->lock); - if ( rd->grant_table->gt_version == 0 ) + if ( rgt->gt_version == 0 ) PIN_FAIL(unlock_out, GNTST_general_error, "remote grant table not yet set up&...
2020 Jul 31
0
[PATCH] vdpasim: protect concurrent access to iommu iotlb
...eration; u64 features; + /* spinlock to synchronize iommu table */ + spinlock_t iommu_lock; }; static struct vdpasim *vdpasim_dev; @@ -118,7 +120,9 @@ static void vdpasim_reset(struct vdpasim *vdpasim) for (i = 0; i < VDPASIM_VQ_NUM; i++) vdpasim_vq_reset(&vdpasim->vqs[i]); + spin_lock(&vdpasim->iommu_lock); vhost_iotlb_reset(vdpasim->iommu); + spin_unlock(&vdpasim->iommu_lock); vdpasim->features = 0; vdpasim->status = 0; @@ -236,8 +240,10 @@ static dma_addr_t vdpasim_map_page(struct device *dev, struct page *page, /* For simplicity, use identical...
2009 Jul 18
0
[PATCH 5/6] fs/btrfs: convert nested spin_lock_irqsave to spin_lock
From: Julia Lawall <julia@diku.dk> If spin_lock_irqsave is called twice in a row with the same second argument, the interrupt state at the point of the second call overwrites the value saved by the first call. Indeed, the second call does not need to save the interrupt state, so it is changed to a simple spin_lock. The semantic match that find...
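A minimal sketch of the bug class (locks and function are illustrative): reusing the same flags variable for a nested irqsave overwrites the outer saved state with "interrupts disabled", so interrupts are never re-enabled on the final restore; since IRQs are already off, the inner lock only needs a plain spin_lock().

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(outer_lock);       /* illustrative */
    static DEFINE_SPINLOCK(inner_lock);

    static void nested_irqsave_sketch(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&outer_lock, flags);   /* saves the IRQ-enabled state */
            /*
             * Bug pattern being removed:
             *     spin_lock_irqsave(&inner_lock, flags);
             * That second call overwrites 'flags' with "IRQs disabled".
             * Correct form while IRQs are already off:
             */
            spin_lock(&inner_lock);
            spin_unlock(&inner_lock);
            spin_unlock_irqrestore(&outer_lock, flags);
    }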
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...ess and bailed out so there was nothing to do for the range_end > callback. Single notifiers are not the problem. I tried to make this clear in the commit message, but lets be more explicit. We have *two* notifiers registered to the mm, A and B: A invalidate_range_start: (has no blocking) spin_lock() counter++ spin_unlock() A invalidate_range_end: spin_lock() counter-- spin_unlock() And this one: B invalidate_range_start: (has blocking) if (!try_mutex_lock()) return -EAGAIN; counter++ mutex_unlock() B invalidate_range_end: spin_lock() counte...
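The flattened example above, written out as code for readability; a sketch only, with illustrative lock, counter, and function names:

    #include <linux/spinlock.h>
    #include <linux/mutex.h>

    static DEFINE_SPINLOCK(a_lock);
    static DEFINE_MUTEX(b_mutex);
    static int a_counter, b_counter;

    /* Notifier A: never blocks */
    static int a_range_start_sketch(void)
    {
            spin_lock(&a_lock);
            a_counter++;
            spin_unlock(&a_lock);
            return 0;
    }

    /* Notifier B: may block, so a non-blocking invalidation must bail out */
    static int b_range_start_sketch(void)
    {
            if (!mutex_trylock(&b_mutex))
                    return -EAGAIN;
            b_counter++;
            mutex_unlock(&b_mutex);
            return 0;
    }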
2010 Jun 16
2
[PATCH] ocfs2/dlm: check dlm_state under spinlock
...changed, 6 insertions(+), 7 deletions(-) diff --git a/fs/ocfs2/dlm/dlmdomain.c b/fs/ocfs2/dlm/dlmdomain.c index 6b5a492..ab82add 100644 --- a/fs/ocfs2/dlm/dlmdomain.c +++ b/fs/ocfs2/dlm/dlmdomain.c @@ -796,7 +796,7 @@ static int dlm_query_join_handler(struct o2net_msg *msg, u32 len, void *data, spin_lock(&dlm_domain_lock); dlm = __dlm_lookup_domain_full(query->domain, query->name_len); if (!dlm) - goto unlock_respond; + goto unlock_domain_respond; /* * There is a small window where the joining node may not see the @@ -811,7 +811,7 @@ static int dlm_query_join_handler(struct o...
2023 May 31
1
[PATCH V2] virtio-fs: Improved request latencies when Virtio queue is full
...f(work, struct virtio_fs_vq, - dispatch_work.work); + dispatch_work); int ret; pr_debug("virtio-fs: worker %s called.\n", __func__); @@ -388,8 +391,6 @@ static void virtio_fs_request_dispatch_work(struct work_struct *work) if (ret == -ENOMEM || ret == -ENOSPC) { spin_lock(&fsvq->lock); list_add_tail(&req->list, &fsvq->queued_reqs); - schedule_delayed_work(&fsvq->dispatch_work, - msecs_to_jiffies(1)); spin_unlock(&fsvq->lock); return; } @@ -436,8 +437,6 @@ static int send_forget_request(struct virtio...
2008 Jul 10
0
[PATCH] Substitute the duplicate spin_lock_irqsave to spin_lock in the vt-d code path
The patch replaces the duplicate spin_lock_irqsave() with spin_lock() in the VT-d code path. The duplicate spin_lock_irqsave() overwrites the originally saved EFLAGS, and thus leaves the local irq disabled. Signed-off-by: Xin, Xiaohui <Xiaohui.xin@intel.com> Signed-off-by: Tian, Kevin <Kevin.Tian@intel.com>...
2015 May 04
2
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
...to reduce the size of the inlined spinlock functions. When >> running on bare metal unlocking is again basically one instruction. > > Out of curiosity, is there a measurable difference? I did a small measurement of the pure locking functions on bare metal without and with my patches. spin_lock() for the first time (lock and code not in cache) dropped from about 600 to 500 cycles. spin_unlock() for first time dropped from 145 to 87 cycles. spin_lock() in a loop dropped from 48 to 45 cycles. spin_unlock() in the same loop dropped from 24 to 22 cycles. Juergen
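For the curious, a hedged userspace analogue of such a measurement (this is not how the author measured, and a pthread spinlock is not the kernel's paravirtualized one; it only illustrates timing a lock/unlock pair with the TSC):

    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>
    #include <x86intrin.h>                    /* __rdtsc() */

    int main(void)
    {
            pthread_spinlock_t lock;
            pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

            uint64_t t0 = __rdtsc();
            pthread_spin_lock(&lock);
            uint64_t t1 = __rdtsc();
            pthread_spin_unlock(&lock);
            uint64_t t2 = __rdtsc();

            printf("lock: %llu cycles, unlock: %llu cycles\n",
                   (unsigned long long)(t1 - t0),
                   (unsigned long long)(t2 - t1));
            pthread_spin_destroy(&lock);
            return 0;
    }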
2019 Oct 15
7
[PATCH 0/5] virtiofs: Fix couple of deadlocks
Hi, We have a couple of places which can result in deadlock. This patch series fixes these. We can be called with fc->bg_lock (for background requests) while submitting a request. This leads to two constraints. - We can't end requests in the submitter's context and call fuse_end_request() as it tries to take fc->bg_lock as well. So queue these requests on a list and use a worker to
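The "queue on a list and use a worker" idea reads roughly like the sketch below (hypothetical names, not the actual virtiofs code): the submitter, which may already hold fc->bg_lock, only appends the request to a list, and the work item ends it later without that lock held.

    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/workqueue.h>

    struct end_ctx_sketch {                   /* hypothetical */
            spinlock_t lock;
            struct list_head end_reqs;
            struct work_struct end_work;      /* its handler runs the real
                                                 request-ending code */
    };

    /* Called from the submission path: do not end the request here, since
     * the caller may already hold fc->bg_lock.
     */
    static void defer_end_sketch(struct end_ctx_sketch *ctx, struct list_head *req)
    {
            spin_lock(&ctx->lock);
            list_add_tail(req, &ctx->end_reqs);
            spin_unlock(&ctx->lock);
            schedule_work(&ctx->end_work);
    }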
2017 Mar 01
2
[PATCH] drm: virtio: use kmem_cache
...++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -74,51 +74,19 @@ void virtio_gpu_cursor_ack(struct virtqueue *vq) int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev) { - struct virtio_gpu_vbuffer *vbuf; - int i, size, count = 16; - void *ptr; - - INIT_LIST_HEAD(&vgdev->free_vbufs); - spin_lock_init(&vgdev->free_vbufs_lock); - count += virtqueue_get_vring_size(vgdev->ctrlq.vq); - count += virtqueue_get_vring_size(vgdev->cursorq.vq); - size = count * VBUFFER_SIZE; - DRM_INFO("virtio vbuffers: %d bufs, %zdB each, %dkB total.\n", - count, VBUFFER_SIZE, size / 1024);...
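The replacement the (truncated) diff converges on is the standard kmem_cache API in place of a hand-rolled free list plus spinlock; a hedged sketch with illustrative names:

    #include <linux/slab.h>

    static struct kmem_cache *vbuf_cache_sketch;      /* illustrative */

    static int vbufs_init_sketch(size_t vbuf_size)
    {
            vbuf_cache_sketch = kmem_cache_create("vbuf_sketch", vbuf_size,
                                                  0, 0, NULL);
            return vbuf_cache_sketch ? 0 : -ENOMEM;
    }

    static void *vbuf_alloc_sketch(void)
    {
            /* replaces taking free_vbufs_lock and popping from free_vbufs */
            return kmem_cache_zalloc(vbuf_cache_sketch, GFP_KERNEL);
    }

    static void vbuf_free_sketch(void *vbuf)
    {
            kmem_cache_free(vbuf_cache_sketch, vbuf);
    }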
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees. The intention is to use it to speed up scrub in a first run, but balance is another hot candidate. In general, every tree walk could be accompanied by a readahead. Deletion of large files comes to mind, where the fetching of the csums takes most of the time. Also the initial build-ups of free-space-caches and