Displaying 11 results from an estimated 11 matches for "ustart".
2019 Jan 04
1
[RFC PATCH V3 5/5] vhost: access vq metadata through kernel virtual address
...ct vhost_vmap *map)
> +{
> +	if (map->addr)
> +		vunmap(map->unmap_addr);
> +
> +	map->addr = NULL;
> +	map->unmap_addr = NULL;
> +}
> +
> +static int vhost_invalidate_vmap(struct vhost_virtqueue *vq,
> +                                 struct vhost_vmap *map,
> +                                 unsigned long ustart,
> +                                 size_t size,
> +                                 unsigned long start,
> +                                 unsigned long end,
> +                                 bool blockable)
> +{
> +	if (end < ustart || start > ustart - 1 + size)
> +		return 0;
> +
> +	if (!blockable)
> +		return -EAGAIN;
> +
> +	mutex_lock(&vq->mutex);...
2018 Dec 29
0
[RFC PATCH V3 5/5] vhost: access vq metadata through kernel virtual address
...>desc) * num;
 }
+static void vhost_uninit_vmap(struct vhost_vmap *map)
+{
+	if (map->addr)
+		vunmap(map->unmap_addr);
+
+	map->addr = NULL;
+	map->unmap_addr = NULL;
+}
+
+static int vhost_invalidate_vmap(struct vhost_virtqueue *vq,
+                                 struct vhost_vmap *map,
+                                 unsigned long ustart,
+                                 size_t size,
+                                 unsigned long start,
+                                 unsigned long end,
+                                 bool blockable)
+{
+	if (end < ustart || start > ustart - 1 + size)
+		return 0;
+
+	if (!blockable)
+		return -EAGAIN;
+
+	mutex_lock(&vq->mutex);
+	vhost_uninit_vmap(map);
+	mutex_unlock(&vq->mutex);...
2011 Jun 01
3
error in model specification for cfa with lavaan-package
...positive definite
Error in Sample(data = data, group = group, sample.cov = sample.cov, sample.mean = sample.mean, : sample covariance can not be inverted"
Then I tried to "lavaanify" my model specification first:
cfa.model <- lavaanify(cfa.model)
   id lhs op rhs user group free ustart fixed.x  label eq.id free.uncon
1   1  f1 =~  x1    1     1    1     NA       0 f1=~x1     0          1
2   2  f1 =~  x2    1     1    2     NA       0 f1=~x2     0          2
3   3  f1 =~  x3    1     1    3     NA       0 f1=~x3     0          3
4   4  f1 =~  x4    1     1    4     NA       0...
2019 Mar 06
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...(map->addr) {
+		vunmap(map->unmap_addr);
+		kfree(map->pages);
+		map->pages = NULL;
+		map->npages = 0;
+	}
+
+	map->addr = NULL;
+	map->unmap_addr = NULL;
+}
+
+static void vhost_invalidate_vmap(struct vhost_virtqueue *vq,
+                                  struct vhost_vmap *map,
+                                  unsigned long ustart,
+                                  size_t size,
+                                  unsigned long start,
+                                  unsigned long end)
+{
+	if (end < ustart || start > ustart - 1 + size)
+		return;
+
+	dump_stack();
+	mutex_lock(&vq->mutex);
+	vhost_uninit_vmap(map);
+	mutex_unlock(&vq->mutex);
+}
+
+
+static void vhost_invalidate(struct...
2019 Mar 06
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...ee(map->pages);
> +		map->pages = NULL;
> +		map->npages = 0;
> +	}
> +
> +	map->addr = NULL;
> +	map->unmap_addr = NULL;
> +}
> +
> +static void vhost_invalidate_vmap(struct vhost_virtqueue *vq,
> +                                  struct vhost_vmap *map,
> +                                  unsigned long ustart,
> +                                  size_t size,
> +                                  unsigned long start,
> +                                  unsigned long end)
> +{
> +	if (end < ustart || start > ustart - 1 + size)
> +		return;
> +
> +	dump_stack();
> +	mutex_lock(&vq->mutex);
> +	vhost_uninit_vmap(map);
> +	mutex_unlock(&vq-...
2018 Dec 29
12
[RFC PATCH V3 0/5]
Hi:
This series tries to access virtqueue metadata through a kernel virtual
address instead of the copy_user() friends, since those carry too much
overhead: checks, speculation barriers, or even hardware feature
toggling.
Tests show about a 24% improvement in TX PPS. It should benefit other
cases as well.
Changes from V2:
- fix the buggy range-overlap check
- tear down the MMU notifier during vhost ioctl to make sure
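The first change item refers to the interval test visible in the patch snippets above, if (end < ustart || start > ustart - 1 + size), which decides whether an invalidated range touches the vmapped metadata at [ustart, ustart + size - 1]. As a minimal, self-contained C illustration (the function name range_overlaps and the addresses below are made up, not taken from the series), the check behaves like this:

#include <stdbool.h>
#include <stdio.h>

/* True when the invalidated range [start, end] (end treated as inclusive,
 * the way the posted check does) overlaps the vmapped user range
 * [ustart, ustart + size - 1]. */
static bool range_overlaps(unsigned long ustart, unsigned long size,
                           unsigned long start, unsigned long end)
{
	return !(end < ustart || start > ustart - 1 + size);
}

int main(void)
{
	/* A 4 KiB metadata region mapped at user address 0x10000. */
	unsigned long ustart = 0x10000, size = 0x1000;

	printf("%d\n", range_overlaps(ustart, size, 0x0f000, 0x0ffff)); /* 0: ends just below   */
	printf("%d\n", range_overlaps(ustart, size, 0x10fff, 0x20000)); /* 1: touches last byte */
	printf("%d\n", range_overlaps(ustart, size, 0x11000, 0x20000)); /* 0: starts just above */
	return 0;
}

Compiled and run, this prints 0, 1 and 0, which matches treating both ranges as closed intervals.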
2019 Mar 06
12
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
This series tries to access virtqueue metadata through a kernel virtual
address instead of the copy_user() friends, since those carry too much
overhead: checks, speculation barriers, or even hardware feature
toggling. This is done by setting up a kernel address through vmap() and
registering an MMU notifier for invalidation.
Tests show about a 24% improvement in TX PPS. TCP_STREAM doesn't see an
obvious improvement.
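For illustration of the vmap()-plus-pinning pattern this cover letter describes, a rough, hypothetical helper might look like the sketch below. It is not code from the posted series: the names user_vmap and user_vmap_create are invented, error handling is minimal, and a real implementation would also tear the mapping down when the registered MMU notifier reports a range overlapping [uaddr, uaddr + size).

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

struct user_vmap {
	void *addr;		/* kernel address of the first mapped byte */
	void *unmap_addr;	/* page-aligned address for vunmap()       */
	struct page **pages;
	int npages;
};

static int user_vmap_create(struct user_vmap *map, unsigned long uaddr, size_t size)
{
	unsigned long off = uaddr & ~PAGE_MASK;
	int npages = PAGE_ALIGN(off + size) >> PAGE_SHIFT;
	int pinned, i;

	map->pages = kmalloc_array(npages, sizeof(*map->pages), GFP_KERNEL);
	if (!map->pages)
		return -ENOMEM;

	/* Pin the user pages so they stay resident while mapped. */
	pinned = get_user_pages_fast(uaddr & PAGE_MASK, npages, FOLL_WRITE,
				     map->pages);
	if (pinned != npages)
		goto err_unpin;

	/* Build one contiguous kernel mapping over the pinned pages. */
	map->unmap_addr = vmap(map->pages, npages, VM_MAP, PAGE_KERNEL);
	if (!map->unmap_addr)
		goto err_unpin;

	map->addr = map->unmap_addr + off;
	map->npages = npages;
	return 0;

err_unpin:
	for (i = 0; i < pinned; i++)
		put_page(map->pages[i]);
	kfree(map->pages);
	map->pages = NULL;
	return -EFAULT;
}

Teardown would mirror the posted vhost_uninit_vmap(): vunmap() the unmap_addr, put_page() each pinned page, kfree() the page array, and clear the fields under the virtqueue mutex so readers never see a stale mapping.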
2023 Aug 20
3
[PATCH drm-misc-next 0/3] [RFC] DRM GPUVA Manager GPU-VM features
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM