search for: pending_list

Displaying 20 results from an estimated 60 matches for "pending_list".

2008 Mar 10
12
[RFC][PATCH] Use ioemu block drivers through blktap
When I submitted the qcow2 patch for blktap, it was suggested that the qemu block drivers should also be used for blktap to eliminate the current code duplication between ioemu and blktap. The attached patch adds support for a tap:ioemu pseudo driver. Devices using this driver won't use tapdisk (which contains the duplicated code) any more, but will connect to the qemu-dm of the domain. In
2011 Mar 28
22
[PATCH 00/22] Staging: hv: Cleanup-storage-drivers-phase-III
This patch-set deals with some of the style issues in blkvsc_drv.c. We also get rid of most of the "dead code" in this file: 1) Get rid of most of the forward declarations in this file. The only remaining forward declarations are to deal with circular dependencies. 2) Get rid of most of the dead code in the file. Some of the functions in this file are placeholders - they
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...y to be unmapped, but also the backing PT[E]s to be freed before it can even allocate the PT[E]s for the new memory backed mappings. Now, let's have a look at what the GPU scheduler's main loop does. Before picking the next entity to schedule a job for, it tries to fetch the first job from the pending_list and checks whether its dma-fence is signaled already and whether the job can be cleaned up. Subsequent jobs on the pending_list are not taken into consideration. Hence, it might well be that the first job on the pending_list isn't signaled yet, but subsequent jobs are and hence *could* be c...
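For illustration, a minimal C sketch of the behaviour this excerpt describes: only the head of pending_list is examined, so jobs further down the list stay queued even if their fences have already signaled. The struct and function names below are placeholders, not the in-tree drm scheduler implementation.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/dma-fence.h>

struct sched_job {
	struct list_head node;		/* link into pending_list */
	struct dma_fence *done_fence;	/* signaled when the job has finished */
};

/* Return one finished job to clean up, or NULL -- only the list head counts. */
static struct sched_job *get_cleanup_job(struct list_head *pending_list,
					 spinlock_t *lock)
{
	struct sched_job *job;

	spin_lock(lock);
	job = list_first_entry_or_null(pending_list, struct sched_job, node);
	if (job && dma_fence_is_signaled(job->done_fence))
		list_del_init(&job->node);
	else
		job = NULL;	/* later jobs may be signaled, but remain queued */
	spin_unlock(lock);

	return job;
}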
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...o > the backing PT[E]s to be freed before it can even allocate the PT[E]s > for the new memory backed mappings. > > Now, let's have a look what the gpu schedulers main loop does. Before > picking the next entity to schedule a job for, it tries to fetch the > first job from the pending_list and checks whether its dma-fence is > signaled already and whether the job can be cleaned up. Subsequent jobs > on the pending_list are not taken into consideration. Hence, it might > well be that the first job on the pending_list isn't signaled yet, but > subsequent jobs are and he...
2011 Apr 04
18
[PATCH 00/22] Staging: hv: Cleanup storage drivers - Phase IV
More cleanup. In this patch-set we deal with the following issues: 1) While a Linux guest on Hyper-V can be assigned removable media devices (DVD, floppy etc.), these devices are not handled by the Hyper-V block driver. So, we clean up all the dead code dealing with removable media devices. 2) There were multiple functions to retrieve information about the device. Since much of
2011 Apr 06
20
[RESEND][PATCH 00/22] Staging: hv: Cleanup storage drivers - Phase IV
The latest upstream merge changed struct block_device_operations: it got rid of blkvsc_media_changed and introduced the function blkvsc_check_events. This broke all the patches that were sent after the tree was closed the last time. This is a resend of this patch-set to account for this change in the kernel. More cleanup. In this patch-set we deal with the following issues: 1) While a
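As background on that interface change, a hedged sketch of the newer callback style in struct block_device_operations follows: ->check_events() reports an event mask where the removed ->media_changed() returned a boolean. The blkvsc_* body here is a placeholder for illustration, not the actual staging/hv driver code.

#include <linux/module.h>
#include <linux/blkdev.h>

/* New-style callback: report pending disk events as a mask. */
static unsigned int blkvsc_check_events(struct gendisk *gd,
					unsigned int clearing)
{
	return DISK_EVENT_MEDIA_CHANGE;	/* placeholder: always report a change */
}

static const struct block_device_operations blkvsc_ops = {
	.owner        = THIS_MODULE,
	.check_events = blkvsc_check_events,
	/* .media_changed = blkvsc_media_changed,  <- callback removed upstream */
};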
2011 Apr 22
13
[RESEND] [PATCH 00/18] Staging: hv: Cleanup-storage-drivers-phase-III
This is a resend of a previously sent patch-set. This patch-set deals with some of the style issues in blkvsc_drv.c. We also get rid of most of the "dead code" in this file: 1) Get rid of most of the forward declarations in this file. The only remaining forward declarations are to deal with circular dependencies. 2) Get rid of most of the dead code in the file. Some of the
2016 Jun 22
0
[PATCH 3/3] vhost: device IOTLB API
...NULL; dev->log_file = NULL; dev->umem = NULL; + dev->iotlb = NULL; dev->mm = NULL; spin_lock_init(&dev->work_lock); INIT_LIST_HEAD(&dev->work_list); + init_waitqueue_head(&dev->wait); + INIT_LIST_HEAD(&dev->read_list); + INIT_LIST_HEAD(&dev->pending_list); + spin_lock_init(&dev->iotlb_lock); dev->worker = NULL; for (i = 0; i < dev->nvqs; ++i) { @@ -563,6 +573,15 @@ void vhost_dev_stop(struct vhost_dev *dev) } EXPORT_SYMBOL_GPL(vhost_dev_stop); +static void vhost_umem_free(struct vhost_umem *umem, + struct vhost_umem_...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...g PT[E]s to be freed before it can even allocate the PT[E]s > for the new memory backed mappings. > > Now, let's have a look what the gpu schedulers main loop does. Before > picking the next entity to schedule a job for, it tries to fetch the > first job from the pending_list and checks whether its dma-fence is > signaled already and whether the job can be cleaned up. Subsequent jobs > on the pending_list are not taken into consideration. Hence, it might > well be that the first job on the pending_list isn't signaled yet, but > subsequent...
2013 Sep 05
12
[PATCH 0/5] Memory leaks amended
...c reviewing. Based on David's branch 'integration-20130903'. Gui Hecheng (5): btrfs-progs:free local variable buf upon unsuccessful returns btrfs-progs:local variable memory freed btrfs-progs: missing tree-freeing statements added btrfs-progs:free the local list pending_list in btrfs_scan_one_dir btrfs-progs:free strdup()s that are not freed btrfs-image.c | 2 ++ cmds-check.c | 5 +++++ cmds-subvolume.c | 48 ++++++++++++++++++++++++++++++++++-------------- mkfs.c | 12 +++++++++++- utils.c | 10 ++++++++-- 5 files changed, 60 insertions...
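The pending_list leak mentioned for btrfs_scan_one_dir comes down to releasing every queued entry before returning. A minimal sketch of that pattern follows, assuming the kernel-style list helpers that btrfs-progs ships; struct pending_dir here is illustrative, not the actual btrfs-progs type.

#include <stdlib.h>
#include "kerncompat.h"	/* btrfs-progs compatibility helpers (assumption) */
#include "list.h"		/* btrfs-progs copy of the kernel list macros */

/* Illustrative entry type: directories still waiting to be scanned. */
struct pending_dir {
	struct list_head list;
	char name[4096];
};

/* Free every entry still queued on the local pending_list. */
static void free_pending_list(struct list_head *pending_list)
{
	struct pending_dir *pending, *tmp;

	list_for_each_entry_safe(pending, tmp, pending_list, list) {
		list_del(&pending->list);
		free(pending);
	}
}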
2019 Jun 05
0
[vhost:linux-next 12/19] drivers/vhost/vhost.h:196:22: error: field 'mmu_notifier' has incomplete type
...queue **vqs; 199 int nvqs; 200 struct eventfd_ctx *log_ctx; 201 struct llist_head work_list; 202 struct task_struct *worker; 203 struct vhost_umem *umem; 204 struct vhost_umem *iotlb; 205 spinlock_t iotlb_lock; 206 struct list_head read_list; 207 struct list_head pending_list; 208 wait_queue_head_t wait; 209 int iov_limit; 210 int weight; 211 int byte_weight; 212 }; 213 --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/pipermail/kbuild-all Intel Corporation -------------- next...
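The "incomplete type" error reported here is the generic C rule that a struct may embed another type by value only once the full definition is visible; a pointer to a forward-declared type is fine. A minimal reproduction is below (the second struct deliberately triggers the compiler error); it is a generic example, not the actual vhost/mmu_notifier definitions.

struct mmu_notifier;			/* forward declaration only */

struct dev_with_pointer {
	struct mmu_notifier *notifier;	/* ok: pointer to an incomplete type */
};

struct dev_with_member {
	struct mmu_notifier notifier;	/* error: field has incomplete type */
};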
2019 Jun 06
2
[PATCH] vhost: Don't use defined in VHOST_ARCH_CAN_ACCEL_UACCESS definition
...->invalidate_count = 0; __vhost_vq_meta_reset(vq); -#if VHOST_ARCH_CAN_ACCEL_UACCESS +#ifdef VHOST_ARCH_CAN_ACCEL_UACCESS vhost_reset_vq_maps(vq); #endif } @@ -635,7 +635,7 @@ void vhost_dev_init(struct vhost_dev *dev, INIT_LIST_HEAD(&dev->read_list); INIT_LIST_HEAD(&dev->pending_list); spin_lock_init(&dev->iotlb_lock); -#if VHOST_ARCH_CAN_ACCEL_UACCESS +#ifdef VHOST_ARCH_CAN_ACCEL_UACCESS vhost_init_maps(dev); #endif @@ -726,7 +726,7 @@ long vhost_dev_set_owner(struct vhost_dev *dev) if (err) goto err_cgroup; -#if VHOST_ARCH_CAN_ACCEL_UACCESS +#ifdef VHOST_...
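For context, expanding a macro whose body contains `defined` inside an #if is undefined behaviour, which is why the feature test is switched to a plain #ifdef on a macro that is only defined when its prerequisites hold. A sketch of that pattern is below; the config symbols in the condition are illustrative, not necessarily the ones the vhost code actually tests.

/* Define the feature macro only when its prerequisites are met ... */
#if defined(CONFIG_MMU_NOTIFIER) && defined(CONFIG_64BIT)
#define VHOST_ARCH_CAN_ACCEL_UACCESS
#endif

/* ... and test for its existence rather than for an expanded value. */
#ifdef VHOST_ARCH_CAN_ACCEL_UACCESS
/* accelerated userspace-access paths are compiled in here */
#endif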
2017 Mar 07
2
[PATCH] vhost: Move vhost.h to allow vhost driver out-of-tree compilation
...ex mutex; - struct vhost_virtqueue **vqs; - int nvqs; - struct file *log_file; - struct eventfd_ctx *log_ctx; - struct llist_head work_list; - struct task_struct *worker; - struct vhost_umem *umem; - struct vhost_umem *iotlb; - spinlock_t iotlb_lock; - struct list_head read_list; - struct list_head pending_list; - wait_queue_head_t wait; -}; - -void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs); -long vhost_dev_set_owner(struct vhost_dev *dev); -bool vhost_dev_has_owner(struct vhost_dev *dev); -long vhost_dev_check_owner(struct vhost_dev *); -struct vhost_umem *vhost_dev_reset...
2016 Jun 23
3
[PATCH V2 0/3] basic device IOTLB support for vhost_net
This patch tries to implement a device IOTLB for vhost. This could be used in co-operation with a userspace IOMMU implementation (qemu) for a secure DMA environment (DMAR) in the guest. The idea is simple. When vhost meets an IOTLB miss, it will request the assistance of userspace to do the translation; this is done through: - when there's an IOTLB miss, it will notify userspace through
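To make the miss/translate round trip concrete, here is a hedged userspace-side sketch based on the vhost_msg/vhost_iotlb_msg interface that eventually landed upstream; translate_iova() is a hypothetical helper standing in for the IOMMU emulation, and details of this V2 posting may differ from the merged code.

#include <unistd.h>
#include <linux/types.h>
#include <linux/vhost.h>

/* Hypothetical lookup into the emulated IOMMU (not part of the vhost API). */
extern __u64 translate_iova(__u64 iova, __u64 *size, __u8 perm);

static int handle_iotlb_miss(int vhost_fd)
{
	struct vhost_msg msg;

	/* vhost reports IOTLB misses as messages readable from its fd. */
	if (read(vhost_fd, &msg, sizeof(msg)) != sizeof(msg))
		return -1;
	if (msg.type != VHOST_IOTLB_MSG || msg.iotlb.type != VHOST_IOTLB_MISS)
		return 0;

	/* Resolve the guest IOVA and push the mapping back as an update. */
	msg.iotlb.uaddr = translate_iova(msg.iotlb.iova, &msg.iotlb.size,
					 msg.iotlb.perm);
	msg.iotlb.type = VHOST_IOTLB_UPDATE;

	return write(vhost_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}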