Displaying 20 results from an estimated 36 matches for "_lock".
2019 Apr 23
1
[RFC: nbdkit PATCH] cleanup: Assert mutex sanity
...iff --git a/common/utils/cleanup.h b/common/utils/cleanup.h
index e6e6140..0ab9e65 100644
--- a/common/utils/cleanup.h
+++ b/common/utils/cleanup.h
@@ -43,6 +43,9 @@ extern void cleanup_unlock (pthread_mutex_t **ptr);
#define CLEANUP_UNLOCK __attribute__((cleanup (cleanup_unlock)))
#define ACQUIRE_LOCK_FOR_CURRENT_SCOPE(mutex) \
CLEANUP_UNLOCK pthread_mutex_t *_lock = mutex; \
- pthread_mutex_lock (_lock)
+ do { \
+ int _r = pthread_mutex_lock (_lock); \
+ assert (!_r); \
+ } while (0)
#endif /* NBDKIT_CLEANUP_H */
diff --git a/common/utils/cleanup.c b/common/utils/cleanup.c
index 1...
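For context, the hunk above builds on GCC/Clang's cleanup attribute: the pointer declared by ACQUIRE_LOCK_FOR_CURRENT_SCOPE is unlocked automatically when it leaves scope, and the RFC additionally asserts that pthread_mutex_lock() succeeded. A minimal self-contained sketch of the same pattern (the cleanup_unlock body and the next_id()/counter_lock demo are illustrative, not copied from nbdkit's cleanup.c):

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* Runs automatically when the annotated pointer goes out of scope. */
static void
cleanup_unlock (pthread_mutex_t **ptr)
{
  pthread_mutex_unlock (*ptr);
}

#define CLEANUP_UNLOCK __attribute__((cleanup (cleanup_unlock)))
#define ACQUIRE_LOCK_FOR_CURRENT_SCOPE(mutex) \
  CLEANUP_UNLOCK pthread_mutex_t *_lock = mutex; \
  do { \
    int _r = pthread_mutex_lock (_lock); \
    assert (!_r); \
  } while (0)

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static int
next_id (void)
{
  ACQUIRE_LOCK_FOR_CURRENT_SCOPE (&counter_lock);
  return ++counter;      /* counter_lock is released on every return path */
}

int
main (void)
{
  printf ("%d %d\n", next_id (), next_id ());
  return 0;
}

Compile with a C99-or-later GCC/Clang and -pthread; with the asserted form, a failed lock aborts instead of silently continuing unlocked.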
2019 Apr 23
0
[nbdkit PATCH 1/4] cleanup: Move cleanup.c to common
...__attribute__((cleanup (cleanup_free)))
+extern void cleanup_extents_free (void *ptr);
+#define CLEANUP_EXTENTS_FREE __attribute__((cleanup (cleanup_extents_free)))
+extern void cleanup_unlock (pthread_mutex_t **ptr);
+#define CLEANUP_UNLOCK __attribute__((cleanup (cleanup_unlock)))
+#define ACQUIRE_LOCK_FOR_CURRENT_SCOPE(mutex) \
+ CLEANUP_UNLOCK pthread_mutex_t *_lock = mutex; \
+ pthread_mutex_lock (_lock)
+
+#endif /* NBDKIT_CLEANUP_H */
diff --git a/server/internal.h b/server/internal.h
index 817f022..67fccfc 100644
--- a/server/internal.h
+++ b/server/internal.h
@@ -42,6 +42,7 @@
#define N...
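The CLEANUP_FREE / CLEANUP_EXTENTS_FREE declarations above follow the same cleanup-attribute idiom. A minimal sketch of the free variant, assuming the helper simply frees whatever the annotated pointer points to (the greet() example and the use of asprintf are illustrative only):

#define _GNU_SOURCE            /* for asprintf in the example */
#include <stdio.h>
#include <stdlib.h>

/* Frees the allocation when the annotated pointer leaves scope. */
static void
cleanup_free (void *ptr)
{
  free (* (void **) ptr);
}

#define CLEANUP_FREE __attribute__((cleanup (cleanup_free)))

static void
greet (const char *name)
{
  CLEANUP_FREE char *msg = NULL;

  if (asprintf (&msg, "hello, %s", name) == -1)
    return;                    /* msg is still NULL; free (NULL) is a no-op */
  puts (msg);                  /* msg is freed on every return path */
}

int
main (void)
{
  greet ("world");
  return 0;
}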
2019 Apr 23
8
[nbdkit PATCH 0/4] Start using cleanup macros in filters/plugins
...utils, but it looks like the former is for things that are
inlineable via .h only, while the latter is when you need to link in
a convenience library, so this landed in the latter.
Eric Blake (4):
cleanup: Move cleanup.c to common
filters: Utilize CLEANUP_EXTENTS_FREE
filters: Utilize ACQUIRE_LOCK_FOR_CURRENT_SCOPE
plugins: Utilize ACQUIRE_LOCK_FOR_CURRENT_SCOPE
common/utils/cleanup.h | 48 ++++++++++++++++++++++++++++++
server/internal.h | 12 +-------
{server => common/utils}/cleanup.c | 5 ++--
filters/log/log.c | 10 +++----
filters/o...
2019 May 20
5
[PATCH 1/2] drm: Add drm_gem_vram_{pin/unpin}_reserved() and convert mgag200
...ttm_operation_ctx ctx = { false, false };
I think it would be good to have a lockdep_assert_held here for the ww_mutex.
Also general thing: _reserved is kinda ttm lingo, for dma-buf reservations
we call the structure tracking the fences+lock the "reservation", but the
naming scheme used is _lock/_unlock.
I think it would be good to be consistent with that, and use _locked here.
Especially for a very simplified vram helper like this one I expect that's
going to lead to less wtf moments by driver writers :-)
Maybe we should also do a large-scale s/reserve/lock/ within ttm, to align
more w...
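The review asks for two things in helpers that require the caller to already hold the object's reservation ww_mutex: a _locked suffix and a lockdep assertion. A rough kernel-side sketch of that shape; the struct, field names, and helper below are invented for illustration and are not the actual drm_gem_vram code:

#include <linux/lockdep.h>
#include <linux/types.h>
#include <linux/ww_mutex.h>

/* Illustrative only -- not the real drm_gem_vram_object layout. */
struct my_vram_object {
	struct ww_mutex resv;	/* stand-in for the dma-buf reservation lock */
	bool pinned;
};

/* "_locked" suffix: the caller must already hold obj->resv. */
static int my_vram_pin_locked(struct my_vram_object *obj)
{
	lockdep_assert_held(&obj->resv.base);	/* ww_mutex embeds a struct mutex */
	obj->pinned = true;
	return 0;
}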
2015 Nov 14
3
[lit] RFC: Per test timeout
Hi,
A feature I've wanted in lit for a while is having a timeout per test.
Attached are patches that implement this idea.
I'm e-mailing llvm-dev rather than llvm-commits
because I want to gather more feedback on my initial implementation and
hopefully some answers to some unresolved issues with my implementation.
Currently in lit you can set a global timeout for
all of the tests but
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...sible
> to crash the kernel with this, and it wouldn't be nice to fail if
> somebody decides to put VM_SHARED ext4 (we could easily allow vhost
> ring only backed by anon or tmpfs or hugetlbfs to solve this of
> course).
>
> It sounds like we should at least optimize away the _lock from
> set_page_dirty if it's anon/hugetlbfs/tmpfs, would be nice if there
> was a clean way to do that.
>
> Now assuming we don't nak the use on ext4 VM_SHARED and we stick to
> set_page_dirty_lock for such case: could you recap how that
> __writepage ext4 crash was solv...
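The optimization being floated is to call plain set_page_dirty() when the ring pages are known to be backed by anon/tmpfs/hugetlbfs memory, and keep set_page_dirty_lock() (which takes the page lock) for everything else. A rough sketch of that shape; the helper name and the flag deciding the backing type are assumptions for illustration, not the vhost patch:

#include <linux/mm.h>

/*
 * Illustrative helper, not the vhost patch: mark a pinned ring page
 * dirty.  Per the discussion above, plain set_page_dirty() is proposed
 * as enough when the backing store can only be anon/tmpfs/hugetlbfs;
 * set_page_dirty_lock() takes the page lock and stays the safe default
 * for anything else (e.g. VM_SHARED ext4).
 */
static void mark_ring_page_dirty(struct page *page, bool anon_or_shmem_backed)
{
	if (anon_or_shmem_backed)
		set_page_dirty(page);
	else
		set_page_dirty_lock(page);
}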
2019 May 21
0
[PATCH 1/2] drm: Add drm_gem_vram_{pin/unpin}_reserved() and convert mgag200
Hi,
> I think would be good to have a lockdep_assert_held here for the ww_mutex.
>
> Also general thing: _reserved is kinda ttm lingo, for dma-buf reservations
> we call the structure tracking the fences+lock the "reservation", but the
> naming scheme used is _lock/_unlock.
>
> I think would be good to be consistent with that, and use _locked here.
> Especially for a very simplified vram helper like this one I expect that's
> going to lead to less wtf moments by driver writers :-)
>
> Maybe we should also do a large-scale s/reserve/loc...
2005 Oct 17
1
Dovecot v1.0a3 on OpenBSD 3.7
...een trying to get Dovecot 1.0a3 running on OpenBSD 3.7, with little
luck. I'm getting the following:
Oct 16 17:00:50 mailtest dovecot:
pop3(testuser):open(/var/mail/.temp.mail.mailtest.com.7078.43c0f93e9fecb54a)
failed: Permission denied
Oct 16 17:00:50 mailtest dovecot: pop3(testuser): file_lock_dotlock() failed
with mbox file /var/mail/testuser: Permission denied
Oct 16 17:00:50 mailtest dovecot: pop3-login: Login: user=<testuser>,
method=PLAIN , rip=63.201.8.122, lip=64.4.143.26, TLS
Oct 16 17:00:50 mailtest dovecot: pop3(testuser): Mailbox init failed top=0/0,
retr=0/ del=0/0,...
2019 May 20
1
[PATCH 1/2] drm: Add drm_gem_vram_{pin/unpin}_reserved() and convert mgag200
...>> I think would be good to have a lockdep_assert_held here for the ww_mutex.
>>
>> Also general thing: _reserved is kinda ttm lingo, for dma-buf reservations
>> we call the structure tracking the fences+lock the "reservation", but the
>> naming scheme used is _lock/_unlock.
>>
>> I think would be good to be consistent with that, and use _locked here.
>> Especially for a very simplified vram helper like this one I expect that's
>> going to lead to less wtf moments by driver writers :-)
>>
>> Maybe we should also do a lar...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...eneral it shouldn't be possible
to crash the kernel with this, and it wouldn't be nice to fail if
somebody decides to put VM_SHARED ext4 (we could easily allow vhost
ring only backed by anon or tmpfs or hugetlbfs to solve this of
course).
It sounds like we should at least optimize away the _lock from
set_page_dirty if it's anon/hugetlbfs/tmpfs, would be nice if there
was a clean way to do that.
Now assuming we don't nak the use on ext4 VM_SHARED and we stick to
set_page_dirty_lock for such case: could you recap how that
__writepage ext4 crash was solved if try_to_free_buffers() ru...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...e with write and if not then you force a write
> page fault.
So if the GUP doesn't set FOLL_WRITE, set_page_dirty simply shouldn't
be called in such case. It only ever makes sense if the pte is
writable.
On a side note, the reason the write bit on the pte enabled avoids the
need of the _lock suffix is because of the stable page writeback
guarantees?
> Basicly from mmu notifier callback you have the same right as zap
> pte has.
Good point.
Related to this I already was wondering why the set_page_dirty is not
done in the invalidate. Reading the patch it looks like the dirty is
m...
2011 Jul 29
6
Re: Reg REMUS on two VMs
...ainInfo.resumeDomain(4)
[2011-07-29 09:05:17 5355] DEBUG (XendDomainInfo:3158)
XendDomainInfo.resumeDomain: completed
************************************************
On Thu, Jul 28, 2011 at 7:50 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca>wrote:
> check /var/lib/xen/suspend_evtchn_*_lock.d
> Make sure there are different lock files for each domain.
> And before starting, make sure there are no stray lock files.
>
> Try this litmus test first. (do both commands simultaneously, in two
> different
> terminals)
> terminal 1: xm save -c TestVM1 TestVM1.chkpt
> te...
2019 May 20
0
[PATCH 1/2] drm: Add drm_gem_vram_{pin/unpin}_reserved() and convert mgag200
...e, false };
>
> I think would be good to have a lockdep_assert_held here for the ww_mutex.
>
> Also general thing: _reserved is kinda ttm lingo, for dma-buf reservations
> we call the structure tracking the fences+lock the "reservation", but the
> naming scheme used is _lock/_unlock.
>
> I think would be good to be consistent with that, and use _locked here.
> Especially for a very simplified vram helper like this one I expect that's
> going to lead to less wtf moments by driver writers :-)
>
> Maybe we should also do a large-scale s/reserve/loc...
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...force a write
>> page fault.
> So if the GUP doesn't set FOLL_WRITE, set_page_dirty simply shouldn't
> be called in such case. It only ever makes sense if the pte is
> writable.
>
> On a side note, the reason the write bit on the pte enabled avoids the
> need of the _lock suffix is because of the stable page writeback
> guarantees?
>
>> Basicly from mmu notifier callback you have the same right as zap
>> pte has.
> Good point.
>
> Related to this I already was wondering why the set_page_dirty is not
> done in the invalidate. Reading the...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...Jason Wang wrote:
>
> On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > +{
> > > + int i;
> > > +
> > > + for (i = 0; i < used->npages; i++)
> > > + set_page_dirty_lock(used->pages[i]);
> > This seems to rely on page lock to mark page dirty.
> >
> > Could it happen that page writeback will check the
> > page, find it clean, and then you mark it dirty and then
> > invalidate callback is called?
> >
> >
>
> Yes....
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > > +{
> > > > + int i;
> > > > +
> > > > + for (i = 0; i < used->npages; i++)
> > > > + set_page_dirty_lock(used->pages[i]);
> > > This seems to rely on page lock to mark page dirty.
> > >
> > > Could it happen that page writeback will check the
> > > page, find it clean, and then you mark it dirty and then
> > > invalidate callback is called?
> > &g...