Displaying 13 results matching "poll_queue".
2008 Mar 18 | 1 | Polling is REALLY slow
...but it doesn't work as expected. Here is what I have, with error
checking etc. removed:
class RequestQueuePollerWorker < BackgrounDRb::MetaWorker
  set_worker_name :request_queue_poller_worker

  QUEUE_SLEEP_TIME = 30 # seconds between polls

  # Called once when the worker starts; kicks off the polling loop.
  def create(args = nil)
    @running = true
    self.poll_queue
  end

  def build_all_matches(args = nil)
    # Defer to the worker's thread pool so polling isn't blocked.
    thread_pool.defer(args) do |args|
      requests = Request.find_active(:all)
      requests.each { |request| request.queue! } # using acts_as_state_machine
    end
  end

  protected

  # Was hoping to get multiple threads processing
  def build_mat...
2019 Mar 19 | 3 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...h set is that we would be able to enable more queues when
there is a limited number of vectors.
Another use case: we may classify queues as high priority or low priority, as
mentioned by Cornelia.
For virtio-blk, we may extend virtio-blk based on this patch set to enable
something similar to write_queues/poll_queues in nvme, when (set->nr_maps != 1).
Yet, the question I am asking in this email thread is about a different scenario.
The issue is not that we don't have enough vectors (although this is why only 1
vector is allocated for all virtio-blk queues). So far virtio-blk has
(set->nr_maps == 1),...
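For context on the nvme mechanism this excerpt refers to: with set->nr_maps > 1, a blk-mq driver supplies one CPU-to-queue map per hctx type, and nvme sizes the per-type sets from its write_queues/poll_queues parameters. A minimal sketch of that hook (illustrative only, not code from this thread; demo_map_queues is a made-up name):

#include <linux/blk-mq.h>

/*
 * Illustrative sketch, not the patch under discussion: the
 * .map_queues hook a driver implements when it registers more than
 * one queue map (set->nr_maps > 1), the mechanism behind nvme's
 * write_queues/poll_queues split.
 */
static int demo_map_queues(struct blk_mq_tag_set *set)
{
	int i;

	for (i = 0; i < set->nr_maps; i++) {
		struct blk_mq_queue_map *map = &set->map[i];

		/*
		 * A real driver would first set map->nr_queues and
		 * map->queue_offset per type (HCTX_TYPE_DEFAULT,
		 * HCTX_TYPE_READ, HCTX_TYPE_POLL) from its queue
		 * counts, then build the CPU-to-queue mapping.
		 */
		blk_mq_map_queues(map);
	}
	return 0;
}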
2019 Mar 14 | 4 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...Please note that this is pci-specific...
>>>
>>>>
>>>>
>>>> This is because the max number of queues is not limited by the number of
>>>> possible cpus.
>>>>
>>>> By default, nvme (regardless of write_queues and poll_queues) and
>>>> xen-blkfront limit the number of queues with num_possible_cpus().
>>>
>>> ...and these are probably pci-specific as well.
>>
>> Not pci-specific, but per-cpu as well.
>
> Ah, I meant that those are pci devices.
>
>>
...
2019 Mar 12 | 4 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...0-req.2
28: 0 0 0 0 PCI-MSI 65540-edge virtio0-req.3
... ...
In the above case, there is one MSI-X vector per queue.
This is because the max number of queues is not limited by the number of
possible cpus.
By default, nvme (regardless of write_queues and poll_queues) and
xen-blkfront limit the number of queues with num_possible_cpus().
Is this by design, or can we fix it with the patch below?
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@...
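The diff above is truncated in this excerpt. As a rough sketch of the behavior being proposed (clamping the virtqueue count the way nvme and xen-blkfront clamp theirs), the core of such a change would amount to something like the following; the helper name is hypothetical, not the literal hunk from the patch.

#include <linux/cpumask.h>	/* num_possible_cpus() */
#include <linux/kernel.h>	/* min_t() */

/*
 * Hypothetical helper sketching the proposal above: never request
 * more virtqueues than there are possible CPUs, mirroring the
 * default policy of nvme and xen-blkfront. Not the literal hunk
 * from the truncated diff.
 */
static unsigned int vblk_clamp_num_vqs(unsigned int num_vqs)
{
	return min_t(unsigned int, num_vqs, num_possible_cpus());
}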
2019 Mar 13 | 2 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...above case, there is one MSI-X vector per queue.
>
> Please note that this is pci-specific...
>
>>
>>
>> This is because the max number of queues is not limited by the number of
>> possible cpus.
>>
>> By default, nvme (regardless of write_queues and poll_queues) and
>> xen-blkfront limit the number of queues with num_possible_cpus().
>
> ...and these are probably pci-specific as well.
Not pci-specific, but per-cpu as well.
>
>>
>>
>> Is this by design, or can we fix it with the patch below?
>>
>>
>> di...
2019 Mar 15 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...note that this is pci-specific...
>>>>
>>>>>
>>>>> This is because the max number of queues is not limited by the number of
>>>>> possible cpus.
>>>>>
>>>>> By default, nvme (regardless of write_queues and poll_queues) and
>>>>> xen-blkfront limit the number of queues with num_possible_cpus().
>>>> ...and these are probably pci-specific as well.
>>> Not pci-specific, but per-cpu as well.
>> Ah, I meant that those are pci devices.
>>
>>>>
...
2019 Mar 12 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...o0-req.3
> ... ...
>
> In the above case, there is one MSI-X vector per queue.
Please note that this is pci-specific...
>
>
> This is because the max number of queues is not limited by the number of
> possible cpus.
>
> By default, nvme (regardless of write_queues and poll_queues) and
> xen-blkfront limit the number of queues with num_possible_cpus().
...and these are probably pci-specific as well.
>
>
> Is this by design, or can we fix it with the patch below?
>
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 4b...
2019 Mar 14 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...0 PCI-MSI 65540-edge virtio0-req.3
> ... ...
>
> In the above case, there is one MSI-X vector per queue.
>
>
> This is because the max number of queues is not limited by the number of
> possible cpus.
>
> By default, nvme (regardless of write_queues and poll_queues) and
> xen-blkfront limit the number of queues with num_possible_cpus().
>
>
> Is this by design, or can we fix it with the patch below?
>
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 4bc083b..df95ce3 100644
> --- a/drivers/block/vir...
2019 Mar 20 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...le more queues when
> there is a limited number of vectors.
>
> Another use case: we may classify queues as high priority or low priority, as
> mentioned by Cornelia.
>
> For virtio-blk, we may extend virtio-blk based on this patch set to enable
> something similar to write_queues/poll_queues in nvme, when (set->nr_maps != 1).
>
>
> Yet, the question I am asking in this email thread is about a different scenario.
>
> The issue is not that we don't have enough vectors (although this is why only 1
> vector is allocated for all virtio-blk queues). So far virtio-blk h...
2019 Mar 13 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...> >
> > Please note that this is pci-specific...
> >
> >>
> >>
> >> This is because the max number of queues is not limited by the number of
> >> possible cpus.
> >>
> >> By default, nvme (regardless of write_queues and poll_queues) and
> >> xen-blkfront limit the number of queues with num_possible_cpus().
> >
> > ...and these are probably pci-specific as well.
>
> Not pci-specific, but per-cpu as well.
Ah, I meant that those are pci devices.
>
> >
> >>
> >>
...
2019 Mar 14 | 0 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
...ck layer limits the number of hw queues to the number of
vcpus. This applies only to virtio devices that interact with the
block layer, but regardless of the virtio transport.
> That's why I think virtio-blk should use a solution similar to nvme's
> (regardless of write_queues and poll_queues) and xen-blkfront's.
Ok, the hw queues limit from above would be an argument to limit to
#vcpus in the virtio-blk driver, regardless of the transport used. (No
idea if there are better ways to deal with this, I'm not familiar with
the interface.)
For virtio devices that don't interact with...
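For reference, the nvme knobs this thread keeps citing are ordinary module parameters. A simplified declaration follows (the real driver routes these through extra validation callbacks, so take this as a sketch, not nvme's source):

#include <linux/module.h>

/*
 * Simplified sketch of the nvme-style knobs referenced above; the
 * parameter names write_queues and poll_queues are the ones this
 * thread refers to, but the real driver adds validation callbacks.
 */
static unsigned int write_queues;
module_param(write_queues, uint, 0644);
MODULE_PARM_DESC(write_queues, "Number of queues used for writes");

static unsigned int poll_queues;
module_param(poll_queues, uint, 0644);
MODULE_PARM_DESC(poll_queues, "Number of queues used for polled IO");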
2019 Mar 15 | 2 | virtio-blk: should num_vqs be limited by num_possible_cpus()?
On Fri, 15 Mar 2019 12:50:11 +0800
Jason Wang <jasowang at redhat.com> wrote:
> Or something like I proposed several years ago?
> https://do-db2.lkml.org/lkml/2014/12/25/169
>
> Btw, for virtio-net, I think we actually want to go for having a maximum
> number of supported queues, like what hardware does. This would be useful
> for e.g. cpu hotplug or XDP (requires per cpu
2018 Aug 03 | 1 | [PATCH net-next v7 3/4] net: vhost: factor out busy polling logic to vhost_net_busy_poll()
...t the
>>>>> handle_rx is scheduled?
>>>>> If we use vhost_has_work(), the work in the dev work_list may be
>>>>> rx work or tx work, right?
>>>> Yes. We can add a boolean to record whether or not we've called
>>>> vhost_poll_queue() for rvq. And avoid calling vhost_net_enable_vq() if
>>>> it was true.
>>> so, the commit be294a51a "vhost_net: Avoid rx queue wake-ups during busypoll"
>>> may not consider the case where the work is tx work in the dev work list.
>> So two kinds of work, tx ki...
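A minimal sketch of the bookkeeping discussed in this excerpt, with hypothetical names: record whether vhost_poll_queue() has been called for the rx virtqueue during busy polling, so the caller can skip the redundant vhost_net_enable_vq() afterwards.

#include "vhost.h"	/* vhost_virtqueue, vhost_poll_queue() */

/*
 * Sketch only (names hypothetical): queue the rx virtqueue's work at
 * most once during the busy loop and remember that it was queued, so
 * the caller knows handle_rx is already scheduled and can avoid
 * re-enabling the vq.
 */
static void queue_rx_work_once(struct vhost_virtqueue *rvq,
			       bool *rx_work_queued)
{
	if (!*rx_work_queued) {
		vhost_poll_queue(&rvq->poll);	/* schedules handle_rx */
		*rx_work_queued = true;
	}
}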