search for: write_queue

Displaying 18 results from an estimated 18 matches for "write_queue".

2019 Mar 19
3
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...e of the patch set is that we would be able to enable more queues when there is a limited number of vectors. Another use case: we may classify queues as high priority or low priority, as mentioned by Cornelia. For virtio-blk, we may extend virtio-blk based on this patch set to enable something similar to write_queues/poll_queues in nvme, when (set->nr_maps != 1). Yet, the question I am asking in this email thread is for a different scenario. The issue is not that we do not have enough vectors (although this is why only 1 vector is allocated for all virtio-blk queues). As so far virtio-blk has (set->nr_...
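For context on the nvme feature this message points to: nvme exposes write_queues and poll_queues module parameters that carve dedicated write and poll queues out of the total I/O queue budget once more than one queue map is in use (set->nr_maps != 1). The small C program below is only a rough, self-contained illustration of that kind of split; the split_queues() function and the policy it applies are invented for this sketch and do not reproduce nvme's actual allocation logic.

    #include <stdio.h>

    struct queue_split {
        unsigned int read_default; /* queues left for the default (read) map */
        unsigned int write;        /* dedicated write queues */
        unsigned int poll;         /* poll queues (serviced without interrupts) */
    };

    /* Toy policy: take poll queues off the top, then dedicated write queues,
     * always leaving at least one queue for the default map. */
    static struct queue_split split_queues(unsigned int total,
                                           unsigned int want_write,
                                           unsigned int want_poll)
    {
        struct queue_split s = { 0, 0, 0 };

        s.poll = want_poll < total ? want_poll : total - 1;
        total -= s.poll;

        s.write = want_write < total ? want_write : total - 1;
        total -= s.write;

        s.read_default = total;
        return s;
    }

    int main(void)
    {
        struct queue_split s = split_queues(8, 2, 2);

        printf("default/read: %u, write: %u, poll: %u\n",
               s.read_default, s.write, s.poll);
        return 0;
    }

Compiled and run as-is, it prints how an eight-queue budget would be divided when two write queues and two poll queues are requested.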
2010 Apr 07
0
[RFC] vhost-blk implementation (v2)
...+struct vhost_blk_io { + struct list_head list; + struct work_struct work; + struct vhost_blk *blk; + struct file *file; + int head; + uint32_t type; + uint32_t nvecs; + uint64_t sector; + uint64_t len; + struct iovec iov[0]; +}; + +static struct workqueue_struct *vblk_workqueue; +static LIST_HEAD(write_queue); +static LIST_HEAD(read_queue); + +static void handle_io_work(struct work_struct *work) +{ + struct vhost_blk_io *vbio, *entry; + struct vhost_virtqueue *vq; + struct vhost_blk *blk; + struct list_head single, *head, *node, *tmp; + + int i, need_free, ret = 0; + loff_t pos; + uint8_t status = 0; +...
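Reformatted for readability, the visible part of that excerpt declares a per-request structure plus a workqueue and the two global lists, write_queue and read_queue, that this search matched. This is kernel code from the RFC and only builds inside a kernel tree; the handle_io_work() body is cut off in the search result and is not reconstructed here, and the short field comments are inferred from the names rather than taken from the patch.

    #include <linux/types.h>
    #include <linux/list.h>
    #include <linux/workqueue.h>
    #include <linux/uio.h>

    struct vhost_blk_io {
            struct list_head list;   /* presumably links the request into write_queue or read_queue */
            struct work_struct work; /* presumably queued on vblk_workqueue and run by handle_io_work() */
            struct vhost_blk *blk;
            struct file *file;
            int head;
            uint32_t type;
            uint32_t nvecs;
            uint64_t sector;
            uint64_t len;
            struct iovec iov[0];     /* variable-length iovec array */
    };

    static struct workqueue_struct *vblk_workqueue;
    static LIST_HEAD(write_queue);
    static LIST_HEAD(read_queue);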
2019 Mar 14
4
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...> >>> Please note that this is pci-specific... >>> >>>> >>>> >>>> This is because the max number of queues is not limited by the number of >>>> possible cpus. >>>> >>>> By default, nvme (regardless of write_queues and poll_queues) and >>>> xen-blkfront limit the number of queues with num_possible_cpus(). >>> >>> ...and these are probably pci-specific as well. >> >> Not pci-specific, but per-cpu as well. > > Ah, I meant that those are pci devices. > ...
2019 Mar 12
4
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...-edge virtio0-req.2 28: 0 0 0 0 PCI-MSI 65540-edge virtio0-req.3 ... ... In the above case, there is one msix vector per queue. This is because the max number of queues is not limited by the number of possible cpus. By default, nvme (regardless of write_queues and poll_queues) and xen-blkfront limit the number of queues with num_possible_cpus(). Is this by design, or can we fix it with the change below? diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 4bc083b..df95ce3 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block...
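The diff is cut off by the snippet, but the question being asked is whether virtio-blk should cap its number of request virtqueues at num_possible_cpus(), as nvme and xen-blkfront effectively do. A minimal sketch of that kind of clamp in the driver's queue setup could look like the fragment below; the function shape and the virtio_cread_feature() read follow the upstream init_vq(), but this is an illustration of the idea under discussion, not the actual patch from the thread, and it omits the allocation and virtio_find_vqs() call the real function performs. It relies on the driver's existing kernel includes.

    /* Sketch only: clamp the VIRTIO_BLK_F_MQ queue count to the number of
     * possible CPUs, mirroring what nvme and xen-blkfront do by default. */
    static int init_vq(struct virtio_blk *vblk)
    {
            struct virtio_device *vdev = vblk->vdev;
            unsigned short num_vqs;
            int err;

            /* Read the device-advertised queue count (fall back to 1 without MQ). */
            err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
                                       struct virtio_blk_config, num_queues,
                                       &num_vqs);
            if (err)
                    num_vqs = 1;

            /* Proposed change: more queues than possible CPUs buys nothing. */
            num_vqs = min_t(unsigned int, num_vqs, num_possible_cpus());

            /* ... allocate vblk->vqs/names/callbacks and call virtio_find_vqs() ... */
            return 0;
    }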
2019 Mar 13
2
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...> >> In the above case, there is one msix vector per queue. > > Please note that this is pci-specific... > >> >> >> This is because the max number of queues is not limited by the number of >> possible cpus. >> >> By default, nvme (regardless of write_queues and poll_queues) and >> xen-blkfront limit the number of queues with num_possible_cpus(). > > ...and these are probably pci-specific as well. Not pci-specific, but per-cpu as well. > >> >> >> Is this by design, or can we fix it with the change below? >> >...
2019 Mar 15
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...>> Please note that this is pci-specific... >>>> >>>>> >>>>> This is because the max number of queues is not limited by the number of >>>>> possible cpus. >>>>> >>>>> By default, nvme (regardless of write_queues and poll_queues) and >>>>> xen-blkfront limit the number of queues with num_possible_cpus(). >>>> ...and these are probably pci-specific as well. >>> Not pci-specific, but per-cpu as well. >> Ah, I meant that those are pci devices. >> >>> ...
2019 Mar 12
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...0-edge virtio0-req.3 > ... ... > > In the above case, there is one msix vector per queue. Please note that this is pci-specific... > > > This is because the max number of queues is not limited by the number of > possible cpus. > > By default, nvme (regardless of write_queues and poll_queues) and > xen-blkfront limit the number of queues with num_possible_cpus(). ...and these are probably pci-specific as well. > > > Is this by design, or can we fix it with the change below? > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk...
2019 Mar 14
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...0 0 0 PCI-MSI 65540-edge virtio0-req.3 > ... ... > > In the above case, there is one msix vector per queue. > > > This is because the max number of queues is not limited by the number of > possible cpus. > > By default, nvme (regardless of write_queues and poll_queues) and > xen-blkfront limit the number of queues with num_possible_cpus(). > > > Is this by design, or can we fix it with the change below? > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > index 4bc083b..df95ce3 100644 > --- a/d...
2019 Mar 20
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...able to enable more queues when > there is a limited number of vectors. > > Another use case: we may classify queues as high priority or low priority, as > mentioned by Cornelia. > > For virtio-blk, we may extend virtio-blk based on this patch set to enable > something similar to write_queues/poll_queues in nvme, when (set->nr_maps != 1). > > > Yet, the question I am asking in this email thread is for a different scenario. > > The issue is not that we do not have enough vectors (although this is why only 1 > vector is allocated for all virtio-blk queues). As so far...
2019 Mar 13
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...ctor per queue. > > > > Please note that this is pci-specific... > > > >> > >> > >> This is because the max number of queues is not limited by the number of > >> possible cpus. > >> > >> By default, nvme (regardless of write_queues and poll_queues) and > >> xen-blkfront limit the number of queues with num_possible_cpus(). > > > > ...and these are probably pci-specific as well. > > Not pci-specific, but per-cpu as well. Ah, I meant that those are pci devices. > > > > >>...
2019 Mar 14
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...nsport. - The block layer limits the number of hw queues to the number of vcpus. This applies only to virtio devices that interact with the block layer, but regardless of the virtio transport. > That's why I think virtio-blk should use a solution similar to nvme > (regardless of write_queues and poll_queues) and xen-blkfront. Ok, the hw queues limit from above would be an argument to limit to #vcpus in the virtio-blk driver, regardless of the transport used. (No idea if there are better ways to deal with this, I'm not familiar with the interface.) For virtio devices that don'...
2019 Mar 15
2
virtio-blk: should num_vqs be limited by num_possible_cpus()?
On Fri, 15 Mar 2019 12:50:11 +0800 Jason Wang <jasowang at redhat.com> wrote: > Or something like I proposed several years ago? > https://do-db2.lkml.org/lkml/2014/12/25/169 > > Btw, for virtio-net, I think we actually want to go for having a maximum > number of supported queues like what hardware did. This would be useful > for e.g. cpu hotplug or XDP (requires per cpu