Displaying 15 results from an estimated 15 matches for "drop_lock".
2019 Oct 02
0
DANGER WILL ROBINSON, DANGER
...struct_from_file(vmf->vma->vm_file);
...
again:
	hmm_range_register(&range);
	hmm_range_snapshot(&range);
	take_lock(kvmms->update);
	if (hmm_range_valid(&range)) {
		vm_insert_pfn();
		drop_lock(kvmms->update);
		hmm_range_unregister(&range);
		return VM_FAULT_NOPAGE;
	}
	drop_lock(kvmms->update);
	goto again;
}
The notifier callback:
kvmms_notifier_start() {
take_lock(kvmms->update);...
2017 Jun 22
0
[PATCH v2 03/14] drm/fb-helper: do a generic fb_setcmap helper in terms of crtc .gamma_set
...ap->start, cmap->green,
-	       cmap->len * sizeof(u16));
-	memcpy(b + cmap->start, cmap->blue,
-	       cmap->len * sizeof(u16));
+	crtc = fb_helper->crtc_info[i].mode_set.crtc;
+	if (!crtc->funcs->gamma_set || !crtc->gamma_size) {
+		ret = -EINVAL;
+		goto drop_locks;
+	}
-	for (j = 0; j < cmap->len; j++) {
-		u16 hred, hgreen, hblue, htransp = 0xffff;
+	if (cmap->start + cmap->len > crtc->gamma_size) {
+		ret = -EINVAL;
+		goto drop_locks;
+	}
-		hred = *red++;
-		hgreen = *green++;
-		hblue = *blue++;
+	r = crtc->gamma_sto...
2019 Oct 02
2
DANGER WILL ROBINSON, DANGER
...);
> ...
> again:
>	hmm_range_register(&range);
>	hmm_range_snapshot(&range);
>	take_lock(kvmms->update);
>	if (hmm_range_valid(&range)) {
>		vm_insert_pfn();
>		drop_lock(kvmms->update);
>		hmm_range_unregister(&range);
>		return VM_FAULT_NOPAGE;
>	}
>	drop_lock(kvmms->update);
>	goto again;
> }
>
> The notifier callback:
> kvmms_notifier_start() {...
2017 Jun 22
1
[PATCH v2 03/14] drm/fb-helper: do a generic fb_setcmap helper in terms of crtc .gamma_set
...c_funcs->load_lut(crtc);
+		ret = crtc->funcs->gamma_set(crtc, r, g, b,
+					     crtc->gamma_size, &ctx);
+		if (ret)
+			break;
	}
- out:
-	drm_modeset_unlock_all(dev);
-	return rc;
+out:
+	if (ret == -EDEADLK) {
+		drm_modeset_backoff(&ctx);
+		goto retry;
+	}
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
+	return ret;
}
EXPORT_SYMBOL(drm_fb_helper_setcmap);
--
2.1.4
2019 Oct 03
0
DANGER WILL ROBINSON, DANGER
...uct *kvmms = from_mmun(...);
	unsigned long target_foff, size;

	size = end - start;
	target_foff = kvmms_convert_mirror_address(start);
	take_lock(kvmms->mirror_fault_exclusion_lock);
	unmap_mapping_range(kvmms->address_space, target_foff, size, 1);
	drop_lock(kvmms->mirror_fault_exclusion_lock);
}
All that is needed is to make sure that vm_normal_page() will see those
ptes (inside the process that is mirroring the other process) as special,
which is the case either because insert_pfn() marks the pte as special or
because the kvm device driver which control...
2006 Dec 14
2
604995471 7500 routers / upgrade issue
Hi Benjamin:
I think that the following link will give you an idea of what you need to
know:
http://www.cisco.com/warp/customer/620/roadmap_b.shtml
This is for the naming:
http://www.cisco.com/en/US/products/sw/iosswrel/ps1818/products_tech_note09186a0080101cda.shtml
In this case 11.1CC goes to 12.0T, and 12.0T migrates to 12.1 mainline. Don't
worry, you will not lose anything with the new
2019 Aug 09
6
[RFC PATCH v6 71/92] mm: add support for remote mapping
From: Mircea Cîrjaliu <mcirjaliu at bitdefender.com>
The following new mm exports are introduced:
* mm_remote_map(struct mm_struct *req_mm,
                unsigned long req_hva,
                unsigned long map_hva)
* mm_remote_unmap(unsigned long map_hva)
* mm_remote_reset(void)
* rmap_walk_remote(struct page *page,
                   struct rmap_walk_control *rwc)
This patch
2019 Oct 02
5
DANGER WILL ROBINSON, DANGER
On 02/10/19 19:04, Jerome Glisse wrote:
> On Wed, Oct 02, 2019 at 06:18:06PM +0200, Paolo Bonzini wrote:
>>>> If the mapping of the source VMA changes, mirroring can update the
>>>> target VMA via insert_pfn. But what ensures that KVM's MMU notifier
>>>> dismantles its own existing page tables (so that they can be recreated
>>>> with the new
2010 Feb 03
1
[PATCH] ocfs2: Plugs race between the dc thread and an unlock ast message
This patch plugs a race between the downconvert thread and an unlock ast message.
Specifically, after the downconvert worker has done its task, the dc thread needs
to check whether an unlock ast made the downconvert moot.
Reported-by: David Teigland <teigland at redhat.com>
Signed-off-by: Sunil Mushran <sunil.mushran at oracle.com>
Acked-by: Mark Fasheh <mfasheh at suse.com>
---
2007 Jan 25
1
X-UID gaps cause Dovecot/IMAP to hang
...0x390a0 in mbox_update_locking (mbox=0xdd100, lock_type=-4263232)
    at mbox-lock.c:492
	ctx = {mbox = 0xdd100, lock_status = {1, 0, 0, 0},
	  checked_file = false, lock_type = 2, dotlock_last_stale = true}
	max_wait_time = -4263232
	ret = 905472
	i = 1169749509
	drop_locks = false
#6  0x3932c in mbox_lock (mbox=0xdd100, lock_type=2, lock_id_r=0xffbef3ec)
    at mbox-lock.c:534
	ret = 905472
#7  0x3ef14 in mbox_sync (mbox=0xdd100, flags=44) at mbox-sync.c:1642
	lock_type = 2
	index_sync_ctx = (struct mail_index_sync_ctx *) 0x156e1302
	sync...
2017 Jun 22
22
[PATCH v2 00/14] improve the fb_setcmap helper
Hi!
While trying to get CLUT support for the atmel_hlcdc driver, and
specifically for the emulated fbdev interface, I received some
push-back that my feeble in-driver attempts should be solved
by the core. This is my attempt to do it right.
I have obviously not tested all of this with more than a compile,
but patches 1 and 3 are enough to make the atmel-hlcdc driver
do what I need (when patched
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...tx->queue;
+	struct request_list *rl = &ctx->rl;
	struct request *rq = NULL;
-	struct request_list *rl = &q->rq;
	struct elevator_type *et;
-	struct io_context *ioc;
	struct io_cq *icq = NULL;
	const bool is_sync = rw_is_sync(rw_flags) != 0;
-	bool retried = false;
+	const bool drop_lock = (gfp_mask & __GFP_WAIT) != 0;
+	struct io_context *ioc;
	int may_queue;
-retry:
+
	et = q->elevator->type;
	ioc = current->io_context;

	if (unlikely(blk_queue_dead(q)))
		return NULL;

	may_queue = elv_may_queue(q, rw_flags);
	if (may_queue == ELV_MQUEUE_NO)
		goto rq_star...