Displaying 12 results from an estimated 12 matches for "__ctx".
2023 Jul 06
0
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
Hi Boris,
On 6/30/23 10:02, Boris Brezillon wrote:
> Hi Danilo,
>
> On Fri, 30 Jun 2023 00:25:18 +0200
> Danilo Krummrich <dakr at redhat.com> wrote:
>
>> + * int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx)
>> + * {
>> + * struct driver_context *ctx = __ctx;
>> + *
>> + * drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
>> + *
>> + * drm_gpuva_unlink(op->remap.unmap->va);
>> + * kfree(op->remap.unmap->va);
>> + *
>...
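[The excerpt cuts off above. For context, here is one plausible completion of the remap step callback, a sketch only, inferred from the visible lines and the matching map callback in the later hits below; the conditional relinking of prev_va/next_va is an assumption about the elided tail.]

int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx)
{
	struct driver_context *ctx = __ctx;

	/* Split the original mapping into the prev/next remnants. */
	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);

	/* Drop and free the GPUVA being replaced. */
	drm_gpuva_unlink(op->remap.unmap->va);
	kfree(op->remap.unmap->va);

	/* Assumed elided tail: keep any remnant that was actually
	 * inserted from being freed by the caller's cleanup path. */
	if (op->remap.prev) {
		drm_gpuva_link(ctx->prev_va);
		ctx->prev_va = NULL;
	}
	if (op->remap.next) {
		drm_gpuva_link(ctx->next_va);
		ctx->next_va = NULL;
	}

	return 0;
}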
2018 Jan 10
1
[PATCH 1/6] Documentation: crypto: document crypto engine API
...ine_reqctx
> +struct your_tfm_ctx {
> + struct crypto_engine_reqctx enginectx;
> + ...
> +};
> +Why: Since the crypto engine manages only crypto_async_request, it cannot
> +know the underlying request type and therefore only has access to the TFM.
> +So using container_of() to reach __ctx is impossible.
> +Furthermore, the crypto engine cannot know "struct your_tfm_ctx",
> +so it must assume that crypto_engine_reqctx is at the start of it.
> +
> +Order of operations
> +-------------------
> +You have to obtain a struct crypto_engine via crypto_engine_allo...
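[A minimal sketch of the layout the quoted text requires, using the names from the documentation above; the driver-private fields are invented. The engine only ever sees a void pointer, so the cast below is valid only because enginectx sits at offset 0.]

struct your_tfm_ctx {
	struct crypto_engine_reqctx enginectx;	/* must be first */
	/* driver-private state follows; these fields are invented */
	u32 mode;
	u8 key[32];
};

/* What the engine effectively does with the driver's context: */
static struct crypto_engine_reqctx *engine_ctx(void *tfm_ctx)
{
	/* No container_of() possible: the engine does not know the
	 * enclosing type, hence the offset-0 requirement. */
	return tfm_ctx;
}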
2017 Nov 21
0
4.14: WARNING: CPU: 4 PID: 2895 at block/blk-mq.c:1144 with virtio-blk (also 4.12 stable)
...static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie);
static void blk_mq_poll_stats_start(struct request_queue *q);
static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
@@ -2114,8 +2117,8 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
INIT_LIST_HEAD(&__ctx->rq_list);
__ctx->queue = q;
- /* If the cpu isn't present, the cpu is mapped to first hctx */
- if (!cpu_present(i))
+ /* If the cpu isn't online, the cpu is mapped to first hctx */
+ if (!cpu_online(i))
continue;
hctx = blk_mq_map_queue(q, i);
@@ -2158,7 +2161,8 @@...
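[A rough reconstruction of the loop this hunk patches, sketched from the visible lines; everything outside them (the per-cpu lookup, the numa_node fixup) is an assumption based on the 4.14-era source.]

static void blk_mq_init_cpu_queues(struct request_queue *q,
				   unsigned int nr_hw_queues)
{
	unsigned int i;

	for_each_possible_cpu(i) {
		struct blk_mq_ctx *__ctx = per_cpu_ptr(q->queue_ctx, i);
		struct blk_mq_hw_ctx *hctx;

		__ctx->cpu = i;
		spin_lock_init(&__ctx->lock);
		INIT_LIST_HEAD(&__ctx->rq_list);
		__ctx->queue = q;

		/* If the cpu isn't online, the cpu is mapped to first hctx */
		if (!cpu_online(i))
			continue;

		hctx = blk_mq_map_queue(q, i);

		/* Assumed remainder: pin the hctx to a sensible node. */
		if (nr_hw_queues > 1 && hctx->numa_node == NUMA_NO_NODE)
			hctx->numa_node = local_memory_node(cpu_to_node(i));
	}
}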
2017 Nov 21
2
4.14: WARNING: CPU: 4 PID: 2895 at block/blk-mq.c:1144 with virtio-blk (also 4.12 stable)
On 11/21/2017 07:39 PM, Jens Axboe wrote:
> On 11/21/2017 11:27 AM, Jens Axboe wrote:
>> On 11/21/2017 11:12 AM, Christian Borntraeger wrote:
>>>
>>>
>>> On 11/21/2017 07:09 PM, Jens Axboe wrote:
>>>> On 11/21/2017 10:27 AM, Jens Axboe wrote:
>>>>> On 11/21/2017 03:14 AM, Christian Borntraeger wrote:
>>>>>> Bisect
2018 Jan 03
0
[PATCH 1/6] Documentation: crypto: document crypto engine API
...your tfm_ctx the struct crypto_engine_reqctx
+struct your_tfm_ctx {
+ struct crypto_engine_reqctx enginectx;
+ ...
+};
+Why: Since the crypto engine manages only crypto_async_request, it cannot
+know the underlying request type and therefore only has access to the TFM.
+So using container_of() to reach __ctx is impossible.
+Furthermore, the crypto engine cannot know "struct your_tfm_ctx",
+so it must assume that crypto_engine_reqctx is at the start of it.
+
+Order of operations
+-------------------
+You have to obtain a struct crypto_engine via crypto_engine_alloc_init().
+And start it via cr...
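[The "Order of operations" text is truncated. A sketch of the setup sequence it starts to describe: crypto_engine_alloc_init() is named in the excerpt, while crypto_engine_start() and crypto_engine_exit() are assumptions based on the rest of this API.]

static int driver_engine_setup(struct device *dev,
			       struct crypto_engine **out)
{
	struct crypto_engine *engine;
	int ret;

	/* Second argument: run the queue kthread at realtime priority. */
	engine = crypto_engine_alloc_init(dev, true);
	if (!engine)
		return -ENOMEM;

	ret = crypto_engine_start(engine);
	if (ret) {
		crypto_engine_exit(engine);
		return ret;
	}

	*out = engine;
	return 0;
}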
2017 Nov 21
2
4.14: WARNING: CPU: 4 PID: 2895 at block/blk-mq.c:1144 with virtio-blk (also 4.12 stable)
...oll(struct request_queue *q, blk_qc_t cookie);
> static void blk_mq_poll_stats_start(struct request_queue *q);
> static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
> @@ -2114,8 +2117,8 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
> INIT_LIST_HEAD(&__ctx->rq_list);
> __ctx->queue = q;
>
> - /* If the cpu isn't present, the cpu is mapped to first hctx */
> - if (!cpu_present(i))
> + /* If the cpu isn't online, the cpu is mapped to first hctx */
> + if (!cpu_online(i))
> continue;
>
> hctx = bl...
2023 Jun 29
3
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...+ * driver_lock_va_space();
+ * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
+ * driver_unlock_va_space();
+ *
+ * out:
+ * kfree(ctx.new_va);
+ * kfree(ctx.prev_va);
+ * kfree(ctx.next_va);
+ * return ret;
+ * }
+ *
+ * int driver_gpuva_map(struct drm_gpuva_op *op, void *__ctx)
+ * {
+ * struct driver_context *ctx = __ctx;
+ *
+ * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
+ *
+ * drm_gpuva_link(ctx->new_va);
+ *
+ * // prevent the new GPUVA from being freed in
+ * // driver_mapping_create()
+ * ctx->new_va = NULL;
+ *
+ * return 0;
+ * }
+...
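[The callbacks above dereference a driver_context the excerpt never defines. A sketch inferred from the fields the code visibly touches; drm_gpuva_sm_map() hands &ctx through to each step callback as the void *__ctx argument.]

/* Inferred from the excerpt (mgr, new_va, prev_va, next_va);
 * nothing beyond these fields is shown in the search hit. */
struct driver_context {
	struct drm_gpuva_manager *mgr;
	struct drm_gpuva *new_va;
	struct drm_gpuva *prev_va;
	struct drm_gpuva *next_va;
};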
2023 Jul 13
1
[PATCH drm-next v7 02/13] drm: manager to keep track of GPUs VA mappings
...+ * driver_lock_va_space();
+ * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
+ * driver_unlock_va_space();
+ *
+ * out:
+ * kfree(ctx.new_va);
+ * kfree(ctx.prev_va);
+ * kfree(ctx.next_va);
+ * return ret;
+ * }
+ *
+ * int driver_gpuva_map(struct drm_gpuva_op *op, void *__ctx)
+ * {
+ * struct driver_context *ctx = __ctx;
+ *
+ * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
+ *
+ * drm_gpuva_link(ctx->new_va);
+ *
+ * // prevent the new GPUVA from being freed in
+ * // driver_mapping_create()
+ * ctx->new_va = NULL;
+ *
+ * return 0;
+ * }
+...
2023 Jul 20
2
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...+ * driver_lock_va_space();
+ * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
+ * driver_unlock_va_space();
+ *
+ * out:
+ * kfree(ctx.new_va);
+ * kfree(ctx.prev_va);
+ * kfree(ctx.next_va);
+ * return ret;
+ * }
+ *
+ * int driver_gpuva_map(struct drm_gpuva_op *op, void *__ctx)
+ * {
+ * struct driver_context *ctx = __ctx;
+ *
+ * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
+ *
+ * drm_gpuva_link(ctx->new_va);
+ *
+ * // prevent the new GPUVA from being freed in
+ * // driver_mapping_create()
+ * ctx->new_va = NULL;
+ *
+ * return 0;
+ * }
+...
2018 Jan 03
11
[PATCH 0/6] crypto: engine - Permit to enqueue all async requests
Hello
The current crypto_engine supports only ahash and ablkcipher requests.
My first patch, which tried to add skcipher, was NACKed: it would add too many
functions, and adding other algorithms (aead, asymmetric_key) would make the
situation worse.
This patchset removes all algorithm-specific code and now processes only the
generic crypto_async_request.
The request handler function pointers are now moved out of struct
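[The cover letter is cut off mid-sentence. As this work later landed in mainline, the handlers became a small generic ops struct embedded at the start of the driver's context rather than per-algorithm callbacks inside struct crypto_engine. A sketch of that shape, with the caveat that these names follow the eventual mainline API, not necessarily this exact posting.]

/* Illustrative only; see the merged crypto engine API for the
 * authoritative definitions. */
struct crypto_engine_op {
	int (*prepare_request)(struct crypto_engine *engine, void *areq);
	int (*unprepare_request)(struct crypto_engine *engine, void *areq);
	int (*do_one_request)(struct crypto_engine *engine, void *areq);
};

struct crypto_engine_ctx {
	struct crypto_engine_op op;	/* must sit first in the tfm ctx */
};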
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...+{
+ BUG_ON(nr >= q->nr_queues);
+
+ return &q->queue_ctx[nr];
+}
+
+#define queue_for_each_ctx(q, ctx, i) \
+ for (i = 0, ctx = &(q)->queue_ctx[0]; \
+ i < (q)->nr_queues; i++, ctx++) \
+
+#define blk_ctx_sum(q, sum) \
+({ \
+ struct blk_queue_ctx *__ctx; \
+ unsigned int __ret = 0, __i; \
+ \
+ queue_for_each_ctx((q), __ctx, __i) \
+ __ret += sum; \
+ __ret; \
+})
+
+static inline int __queue_in_flight(struct request_queue *q, int index)
+{
+ return blk_ctx_sum(q, __ctx->in_flight[index]);
+}
+
+static inline int...
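[blk_ctx_sum() above relies on a GNU statement expression plus deliberate capture: the caller's 'sum' argument names the macro's internal __ctx iterator, exactly as __queue_in_flight() does. A self-contained demo of the same pattern, with all names invented; it compiles with gcc or clang, since statement expressions are a GNU extension.]

#include <stdio.h>

struct demo_ctx { int in_flight[2]; };

#define demo_for_each_ctx(a, n, ctx, i) \
	for (i = 0, ctx = &(a)[0]; i < (n); i++, ctx++)

#define demo_ctx_sum(a, n, sum)				\
({							\
	struct demo_ctx *__ctx;				\
	unsigned int __ret = 0, __i;			\
							\
	demo_for_each_ctx((a), (n), __ctx, __i)		\
		__ret += sum;				\
	__ret;						\
})

int main(void)
{
	struct demo_ctx ctxs[3] = { { { 1, 2 } }, { { 3, 4 } }, { { 5, 6 } } };

	/* The 'sum' expression references the macro's own __ctx. */
	printf("%u\n", demo_ctx_sum(ctxs, 3, __ctx->in_flight[0])); /* 9 */
	return 0;
}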
2018 Jan 26
10
[PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
Hello
The current crypto_engine supports only ahash and ablkcipher requests.
My first patch, which tried to add skcipher, was NACKed: it would add too many
functions, and adding other algorithms (aead, asymmetric_key) would make the
situation worse.
This patchset removes all algorithm-specific code and now processes only the
generic crypto_async_request.
The request handler function pointers are now moved out of struct