Displaying 14 results from an estimated 68 matches for "num_possible_cpus".
2019 Mar 14
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...>
> In the above case, there is one MSI-X vector per queue.
>
>
> This is because the max number of queues is not limited by the number of
> possible cpus.
>
> By default, nvme (regardless of write_queues and poll_queues) and
> xen-blkfront limit the number of queues with num_possible_cpus().
>
>
> Is this by design, or can we fix it as below?
>
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 4bc083b..df95ce3 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -513,6 +513,8 @@ s...
2019 Mar 12
4
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...-MSI 65540-edge virtio0-req.3
... ...
In the above case, there is one MSI-X vector per queue.
This is because the max number of queues is not limited by the number of
possible cpus.
By default, nvme (regardless of write_queues and poll_queues) and
xen-blkfront limit the number of queues with num_possible_cpus().
Is this by design, or can we fix it as below?
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
if (e...
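(The hunk is truncated here, but a fuller quote later in this listing carries the complete change, so it can be reassembled. The entire proposal is the single min() line below; as a side note, the kernel's min() macro is type-strict, so a type-safe spelling would use min_t(). This is a reconstruction of the diff as posted, not the final mainline patch.)

@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
 	if (err)
 		num_vqs = 1;
 
+	num_vqs = min(num_possible_cpus(), num_vqs);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;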
2019 Mar 12
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...ector per queue.
Please note that this is pci-specific...
>
>
> This is because the max number of queues is not limited by the number of
> possible cpus.
>
> By default, nvme (regardless of write_queues and poll_queues) and
> xen-blkfront limit the number of queues with num_possible_cpus().
...and these are probably pci-specific as well.
>
>
> Is this by design, or can we fix it as below?
>
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 4bc083b..df95ce3 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/dri...
2019 Mar 13
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...> >>
> >>
> >> This is because the max number of queues is not limited by the number of
> >> possible cpus.
> >>
> >> By default, nvme (regardless of write_queues and poll_queues) and
> >> xen-blkfront limit the number of queues with num_possible_cpus().
> >
> > ...and these are probably pci-specific as well.
>
> Not pci-specific, but per-cpu as well.
Ah, I meant that those are pci devices.
>
> >
> >>
> >>
> >> Is this by design, or can we fix it as below?
> >>...
2019 Mar 15
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...>>>>> This is because the max number of queues is not limited by the number of
>>>>> possible cpus.
>>>>>
>>>>> By default, nvme (regardless of write_queues and poll_queues) and
>>>>> xen-blkfront limit the number of queues with num_possible_cpus().
>>>> ...and these are probably pci-specific as well.
>>> Not pci-specific, but per-cpu as well.
>> Ah, I meant that those are pci devices.
>>
>>>>
>>>>>
>>>>> Is this by design, or can we fix it as below?
>...
2019 Mar 18
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
On 2019/3/15 8:41 PM, Cornelia Huck wrote:
> On Fri, 15 Mar 2019 12:50:11 +0800
> Jason Wang <jasowang at redhat.com> wrote:
>
>> Or something like I proposed several years ago?
>> https://lkml.org/lkml/2014/12/25/169
>>
>> Btw, for virtio-net, I think we actually want to go for having a maximum
>> number of supported queues like what hardware
2019 Mar 14
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...rivers/block/virtio_blk.c
> >>>> +++ b/drivers/block/virtio_blk.c
> >>>> @@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
> >>>> if (err)
> >>>> num_vqs = 1;
> >>>>
> >>>> + num_vqs = min(num_possible_cpus(), num_vqs);
> >>>> +
> >>>> vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
> >>>> if (!vblk->vqs)
> >>>> return -ENOMEM;
> >>>
> >>> virtio-blk, however, is not pci-specific...
2019 Mar 13
2
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...that this is pci-specific...
>
>>
>>
>> This is because the max number of queues is not limited by the number of
>> possible cpus.
>>
>> By default, nvme (regardless of write_queues and poll_queues) and
>> xen-blkfront limit the number of queues with num_possible_cpus().
>
> ...and these are probably pci-specific as well.
Not pci-specific, but per-cpu as well.
>
>>
>>
>> Is this by design, or can we fix it as below?
>>
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> in...
2019 Mar 20
0
virtio-blk: should num_vqs be limited by num_possible_cpus()?
On 2019/3/19 10:22, Dongli Zhang wrote:
> Hi Jason,
>
> On 3/18/19 3:47 PM, Jason Wang wrote:
>> On 2019/3/15 8:41 PM, Cornelia Huck wrote:
>>> On Fri, 15 Mar 2019 12:50:11 +0800
>>> Jason Wang <jasowang at redhat.com> wrote:
>>>
>>>> Or something like I proposed several years ago?
>>>>
2019 Mar 19
3
virtio-blk: should num_vqs be limited by num_possible_cpus()?
Hi Jason,
On 3/18/19 3:47 PM, Jason Wang wrote:
>
> On 2019/3/15 8:41 PM, Cornelia Huck wrote:
>> On Fri, 15 Mar 2019 12:50:11 +0800
>> Jason Wang <jasowang at redhat.com> wrote:
>>
>>> Or something like I proposed several years ago?
>>> https://lkml.org/lkml/2014/12/25/169
>>>
>>> Btw, for virtio-net, I think we actually
2019 Mar 15
2
virtio-blk: should num_vqs be limited by num_possible_cpus()?
On Fri, 15 Mar 2019 12:50:11 +0800
Jason Wang <jasowang at redhat.com> wrote:
> Or something like I proposed several years ago?
> https://lkml.org/lkml/2014/12/25/169
>
> Btw, for virtio-net, I think we actually want to go for having a maximum
> number of supported queues, like what hardware does. This would be useful
> for e.g. cpu hotplug or XDP (requires per cpu
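A rough sketch of the shape being suggested, for illustration only; the VBLK_MAX_HW_QUEUES constant, its value, and the min3() clamp are assumptions of this sketch, not code from any driver:

/* Hypothetical: advertise a fixed, hardware-style maximum, then clamp
 * the negotiated queue count to both that maximum and the possible-CPU
 * count, leaving headroom for CPU hotplug and per-CPU users like XDP.
 * Assumes num_vqs is an unsigned int in this context. */
#define VBLK_MAX_HW_QUEUES	128	/* illustrative value */

	num_vqs = min3(num_vqs, (unsigned int)VBLK_MAX_HW_QUEUES,
		       num_possible_cpus());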
2019 Mar 14
4
virtio-blk: should num_vqs be limited by num_possible_cpus()?
...>>>
>>>> This is because the max number of queues is not limited by the number of
>>>> possible cpus.
>>>>
>>>> By default, nvme (regardless of write_queues and poll_queues) and
>>>> xen-blkfront limit the number of queues with num_possible_cpus().
>>>
>>> ...and these are probably pci-specific as well.
>>
>> Not pci-specific, but per-cpu as well.
>
> Ah, I meant that those are pci devices.
>
>>
>>>
>>>>
>>>>
>>>> Is this by design...
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
> >>+void __init __pv_init_lock_hash(void)
> >>+{
> >>+ int pv_hash_size = 4 * num_possible_cpus();
> >>+
> >>+ if (pv_hash_size< (1U<< LFSR_MIN_BITS))
> >>+ pv_hash_size = (1U<< LFSR_MIN_BITS);
> >>+ /*
> >>+ * Allocate space from bootmem which should be page-size aligned
> >>+ * and hence cacheline aligned.
> >>...
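De-quoted, the sizing logic under review reads as follows (reconstructed from the quoted patch; LFSR_MIN_BITS and the elided bootmem allocation are as defined in that series):

void __init __pv_init_lock_hash(void)
{
	/* Four hash entries per possible CPU keeps the table sparsely
	 * loaded, with a floor of 1U << LFSR_MIN_BITS entries. */
	int pv_hash_size = 4 * num_possible_cpus();

	if (pv_hash_size < (1U << LFSR_MIN_BITS))
		pv_hash_size = (1U << LFSR_MIN_BITS);
	/*
	 * Allocate space from bootmem, which should be page-size aligned
	 * and hence cacheline aligned.
	 */
	...
}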
2019 Jul 03
2
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...,
> + const struct flush_tlb_info *info)
> {
> struct {
> struct mmuext_op op;
> @@ -1366,7 +1366,7 @@ static void xen_flush_tlb_others(const struct cpumask *cpus,
> const size_t mc_entry_size = sizeof(args->op) +
> sizeof(args->mask[0]) * BITS_TO_LONGS(num_possible_cpus());
>
> - trace_xen_mmu_flush_tlb_others(cpus, info->mm, info->start, info->end);
> + trace_xen_mmu_flush_tlb_multi(cpus, info->mm, info->start, info->end);
>
> if (cpumask_empty(cpus))
> return; /* nothing to do */
> @@ -1375,9 +1375,17 @@ stati...
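The sizing idiom in this hunk is worth isolating: the multicall argument ends in a CPU bitmap, and only the words that num_possible_cpus() actually requires are counted toward the entry size. A minimal sketch of the idiom, with the surrounding struct abbreviated from the quoted code:

struct flush_args {
	struct mmuext_op op;
	DECLARE_BITMAP(mask, NR_CPUS);
} *args;

/* One multicall entry: the op itself plus only as many bitmap words
 * as the possible-CPU count needs, not the full NR_CPUS worth. */
const size_t mc_entry_size = sizeof(args->op) +
	sizeof(args->mask[0]) * BITS_TO_LONGS(num_possible_cpus());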