Displaying 3 results from an estimated 3 matches for "affd".
2023 Jan 27
[PATCH v2 01/11] genirq/affinity: Export irq_create_affinity_masks()
> > > diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> > > index d9a5c1d65a79..f074a7707c6d 100644
> > > --- a/kernel/irq/affinity.c
> > > +++ b/kernel/irq/affinity.c
> > > @@ -487,6 +487,7 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
> > >
> > > return masks;
> > > }
> > > +EXPORT_SYMBOL_GPL(irq_create_affinity_masks);
> > >
> > > /**
> > > * irq_calc_affinity_vectors - Calculate the optimal number of vectors
> > > --
> > > 2.20.1
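For context, a minimal sketch (not part of the patch) of how a module could
use the helper once it is exported. The wrapper name and the way the masks
are consumed are hypothetical; only irq_create_affinity_masks(), struct
irq_affinity and struct irq_affinity_desc come from the kernel API:

#include <linux/interrupt.h>
#include <linux/printk.h>
#include <linux/slab.h>

/* Hypothetical caller: spread nvqs vectors across CPUs and log the
 * resulting per-queue masks. The descriptor array is allocated by the
 * helper and must be freed by the caller. */
static int example_spread_queues(unsigned int nvqs)
{
	struct irq_affinity affd = { };	/* no pre/post vectors, default spreading */
	struct irq_affinity_desc *masks;
	unsigned int i;

	masks = irq_create_affinity_masks(nvqs, &affd);
	if (!masks)
		return -ENOMEM;

	for (i = 0; i < nvqs; i++)
		pr_info("vq %u -> CPUs %*pbl\n", i,
			cpumask_pr_args(&masks[i].mask));

	kfree(masks);
	return 0;
}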
2023 Mar 28
[PATCH v4 03/11] virtio-vdpa: Support interrupt affinity spreading mechanism
> ... So it should pass a full set needed by the automatic irq
> affinity assignment instead of a subset. Then virtio-vdpa can choose
> to pass a queue to cpu mapping to VDUSE, which is what we do now (use
> set_vq_affinity()).
Yes, so basically two ways (see the sketch after this excerpt):
1) automatic IRQ management: passing affd to find_vqs(), with the
affinity determined by the transport (e.g. vDPA).
2) affinity under the control of the driver: it needs to use
set_vq_affinity() but then has to handle CPU hotplug itself.
Thanks
>
> Thanks,
> Yongji
>
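To make the two options concrete, here is a rough sketch against the virtio
driver API as it looked around this thread (virtio_find_vqs() taking a
struct irq_affinity *, and virtqueue_set_affinity()). The function names and
the naive round-robin placement are illustrative only:

#include <linux/interrupt.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* 1) Automatic IRQ affinity: hand an irq_affinity description to
 *    find_vqs() and let the transport spread the vectors over CPUs. */
static int probe_vqs_automatic(struct virtio_device *vdev, unsigned int nvqs,
			       struct virtqueue *vqs[],
			       vq_callback_t *callbacks[],
			       const char * const names[])
{
	struct irq_affinity desc = { };

	return virtio_find_vqs(vdev, nvqs, vqs, callbacks, names, &desc);
}

/* 2) Driver-controlled affinity: set a per-queue mask explicitly.
 *    A real driver would also register CPU hotplug callbacks so the
 *    mapping stays sane as CPUs come and go. */
static void pin_vqs_manually(struct virtqueue *vqs[], unsigned int nvqs)
{
	unsigned int i, cpu = cpumask_first(cpu_online_mask);

	for (i = 0; i < nvqs; i++) {
		virtqueue_set_affinity(vqs[i], cpumask_of(cpu));
		cpu = cpumask_next(cpu, cpu_online_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_online_mask);
	}
}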
2005 Feb 02
two small-ish optimizations (death by a thousand cuts)
This lpc_restore_order was partially inspired by Miroslav's affd, though
my (not very great) ARM asm version resembled it as well.
The other two slightly reduce the CPU overhead of array indexing in loops.
Additionally, a request for help:
My not-very-optimized lpc_restore_signal is at the URL below; I
couldn't get the ldm* instructions to work as advertised...
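As an illustration of the indexing-overhead point above (not the poster's
actual patch), this is the kind of transformation usually meant: an
LPC-restore style inner loop that re-indexes the history array for every
tap, versus one that walks a pointer backwards. data[] is assumed to carry
order valid warm-up samples before index 0, as in FLAC's lpc_restore_signal():

#include <stdint.h>

/* Before: each tap recomputes the address data + i - j - 1. */
static void restore_indexed(const int32_t *residual, int len,
                            const int32_t *coeff, int order,
                            int shift, int32_t *data)
{
	for (int i = 0; i < len; i++) {
		int64_t sum = 0;
		for (int j = 0; j < order; j++)
			sum += (int64_t)coeff[j] * data[i - j - 1];
		data[i] = residual[i] + (int32_t)(sum >> shift);
	}
}

/* After: hoist a history pointer once per sample and only decrement it
 * per tap, removing the per-tap index arithmetic. */
static void restore_pointer(const int32_t *residual, int len,
                            const int32_t *coeff, int order,
                            int shift, int32_t *data)
{
	for (int i = 0; i < len; i++) {
		const int32_t *hist = &data[i - 1];	/* newest history sample */
		int64_t sum = 0;
		for (int j = 0; j < order; j++)
			sum += (int64_t)coeff[j] * *hist--;
		data[i] = residual[i] + (int32_t)(sum >> shift);
	}
}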