Displaying 9 results from an estimated 9 matches for "wait_count".

2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...* by the kernel.
>> + *
>> + * If this flag is not supplied the kernel executes the associated operations
>> + * synchronously and doesn't accept any &drm_nouveau_sync objects.
>> + */
>> +#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
>> +	/**
>> +	 * @wait_count: the number of wait &drm_nouveau_syncs
>> +	 */
>> +	__u32 wait_count;
>> +	/**
>> +	 * @sig_count: the number of &drm_nouveau_syncs to signal when finished
>> +	 */
>> +	__u32 sig_count;
>> +	/**
>> +	 * @wait_ptr: pointer to &drm_nouveau...
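
For reference, the pieces quoted in these snippets assemble roughly as below. This is a minimal sketch of the ioctl argument built only from the quoted kerneldoc; the struct name and the sig_ptr counterpart to @wait_ptr are assumptions inferred from the wait_*/sig_* symmetry, and the pointers are shown as __u64 per the usual uapi convention, not taken from the patch:

/* Sketch of the VM_BIND argument described above. Only the flag,
 * wait_count, sig_count and wait_ptr appear in the snippet; the
 * rest is assumed for illustration. */
#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1	/* run operations asynchronously */

struct drm_nouveau_vm_bind {
	__u32 flags;		/* DRM_NOUVEAU_VM_BIND_RUN_ASYNC or 0 */
	__u32 wait_count;	/* number of wait &drm_nouveau_syncs */
	__u32 sig_count;	/* number of &drm_nouveau_syncs to signal when finished */
	__u64 wait_ptr;		/* userspace pointer to the wait sync array */
	__u64 sig_ptr;		/* assumed: pointer to the signal sync array */
};

Per the quoted comment, if DRM_NOUVEAU_VM_BIND_RUN_ASYNC is not set, the kernel executes the operations synchronously and does not accept any &drm_nouveau_sync objects.
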
2023 Jan 27
1
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...+ * If this flag is not supplied the kernel executes the associated
>>> operations
>>> + * synchronously and doesn't accept any &drm_nouveau_sync objects.
>>> + */
>>> +#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
>>> +	/**
>>> +	 * @wait_count: the number of wait &drm_nouveau_syncs
>>> +	 */
>>> +	__u32 wait_count;
>>> +	/**
>>> +	 * @sig_count: the number of &drm_nouveau_syncs to signal when
>>> finished
>>> +	 */
>>> +	__u32 sig_count;
>>> ...
2023 Jul 25
1
[PATCH drm-misc-next v8 03/12] drm/nouveau: new VM_BIND uapi interfaces
...; + *
> + * If this flag is not supplied the kernel executes the associated
> operations
> + * synchronously and doesn't accept any &drm_nouveau_sync objects.
> + */
> +#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
> +	/**
> +	 * @wait_count: the number of wait &drm_nouveau_syncs
> +	 */
> +	__u32 wait_count;
> +	/**
> +	 * @sig_count: the number of &drm_nouveau_syncs to signal
> when finished
> +	 */
> +	__u32 sig_count;
> +	/** ...
2023 Jan 27
0
[PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces
...+ * If this flag is not supplied the kernel executes the associated operations
>>>> + * synchronously and doesn't accept any &drm_nouveau_sync objects.
>>>> + */
>>>> +#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
>>>> +	/**
>>>> +	 * @wait_count: the number of wait &drm_nouveau_syncs
>>>> +	 */
>>>> +	__u32 wait_count;
>>>> +	/**
>>>> +	 * @sig_count: the number of &drm_nouveau_syncs to signal when finished
>>>> +	 */
>>>> +	__u32 sig_count;
>>>> +...
2012 Sep 12
2
Deadlock in btrfs-cleaner, related to snapshot deletion
...: 0.000000
[ 386.320063]   .se->statistics.exec_max   : 1.952409
[ 386.320067]   .se->statistics.slice_max  : 0.000000
[ 386.320070]   .se->statistics.wait_max   : 0.029838
[ 386.320073]   .se->statistics.wait_sum   : 0.036460
[ 386.320076]   .se->statistics.wait_count : 32
[ 386.320079]   .se->load.weight           : 1024
[ 386.320083]
[ 386.320083] cfs_rq[0]:/
[ 386.320087]   .exec_clock    : 23614.135229
[ 386.320090]   .MIN_vruntime  : 23465.802624
[ 386.320094]   .min_vruntime  : 23467.274189...
2023 Dec 25
2
[PATCH -next] drm/nouveau: uapi: fix kerneldoc warnings
...the given VM_BIND operation should be executed asynchronously
- * by the kernel.
- *
- * If this flag is not supplied the kernel executes the associated operations
- * synchronously and doesn't accept any &drm_nouveau_sync objects.
- */
 #define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
 /**
  * @wait_count: the number of wait &drm_nouveau_syncs
--
2.34.1
2012 Jul 31
2
Btrfs Intermittent ENOSPC Issues
I've been working on running down intermittent ENOSPC issues. I can only seem to replicate ENOSPC errors when running zlib compression. However, I have been seeing similar ENOSPC errors to a lesser extent when playing with the LZ4HC patches. I apologize for not following up on this sooner, but I had drifted away from using zlib, and didn't notice there was still an issue. My
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, in short, combines AFRs and unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:

volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume

volume afr1
  type cluster/afr
  subvolumes n1-brick2
2012 Aug 01
7
[PATCH] Btrfs: barrier before waitqueue_active
We need an smp_mb() before waitqueue_active to avoid missing wakeups. Previously, Mitch was hitting a deadlock between the ordered flushers and the transaction commit because the ordered flushers were waiting for more refs and were never woken up, so those smp_mb()'s are the most important. Everything else I added for correctness' sake and to avoid getting bitten by this again somewhere else.
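
The idiom at issue is the waker-side check: waitqueue_active() is an unlocked peek at the wait queue, so without a full barrier between updating the wait condition and that check, the wakeup can be lost. A minimal sketch of the pattern with illustrative names (wq is a wait_queue_head_t, wait comes from DEFINE_WAIT(), and the atomic_t done stands in for the real btrfs condition, which the snippet does not show):

/* Sleeper side: prepare_to_wait() implies a full barrier via
 * set_current_state(), so the condition re-check below is safe. */
prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
if (!atomic_read(&done))
	schedule();
finish_wait(&wq, &wait);

/* Waker side: smp_mb() orders the store to 'done' before the unlocked
 * waitqueue_active() load, pairing with the sleeper's implicit barrier;
 * without it the sleeper can miss the wakeup and hang. */
atomic_set(&done, 1);
smp_mb();
if (waitqueue_active(&wq))
	wake_up(&wq);

The waitqueue_active() check is only an optimization to skip taking the wait-queue lock when nobody is sleeping; the barrier is what makes that optimization sound.
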