Michael S. Tsirkin
2021-Sep-23 07:04 UTC
[PATCH 1/4] hwrng: virtio - add an internal buffer
On Thu, Sep 23, 2021 at 08:26:06AM +0200, Laurent Vivier wrote:
> On 22/09/2021 21:02, Michael S. Tsirkin wrote:
> > On Wed, Sep 22, 2021 at 07:09:00PM +0200, Laurent Vivier wrote:
> > > hwrng core uses two buffers that can be mixed in the
> > > virtio-rng queue.
> > >
> > > If the buffer is provided with wait=0 it is enqueued in the
> > > virtio-rng queue but unused by the caller.
> > > On the next call, core provides another buffer but the
> > > first one is filled instead and the new one queued.
> > > And the caller reads the data from the new one that is not
> > > updated, and the data in the first one are lost.
> > >
> > > To avoid this mix, virtio-rng needs to use its own unique
> > > internal buffer at a cost of a data copy to the caller buffer.
> > >
> > > Signed-off-by: Laurent Vivier <lvivier at redhat.com>
> > > ---
> > >  drivers/char/hw_random/virtio-rng.c | 43 ++++++++++++++++++++++-------
> > >  1 file changed, 33 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
> > > index a90001e02bf7..208c547dcac1 100644
> > > --- a/drivers/char/hw_random/virtio-rng.c
> > > +++ b/drivers/char/hw_random/virtio-rng.c
> > > @@ -18,13 +18,20 @@ static DEFINE_IDA(rng_index_ida);
> > >  struct virtrng_info {
> > >  	struct hwrng hwrng;
> > >  	struct virtqueue *vq;
> > > -	struct completion have_data;
> > >  	char name[25];
> > > -	unsigned int data_avail;
> > >  	int index;
> > >  	bool busy;
> > >  	bool hwrng_register_done;
> > >  	bool hwrng_removed;
> > > +	/* data transfer */
> > > +	struct completion have_data;
> > > +	unsigned int data_avail;
> > > +	/* minimal size returned by rng_buffer_size() */
> > > +#if SMP_CACHE_BYTES < 32
> > > +	u8 data[32];
> > > +#else
> > > +	u8 data[SMP_CACHE_BYTES];
> > > +#endif
> >
> > Let's move this logic to a macro in hw_random.h ?
> >
> > >  };
> > >
> > >  static void random_recv_done(struct virtqueue *vq)
> > > @@ -39,14 +46,14 @@ static void random_recv_done(struct virtqueue *vq)
> > >  }
> > >
> > >  /* The host will fill any buffer we give it with sweet, sweet randomness. */
> > > -static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
> > > +static void register_buffer(struct virtrng_info *vi)
> > >  {
> > >  	struct scatterlist sg;
> > >
> > > -	sg_init_one(&sg, buf, size);
> > > +	sg_init_one(&sg, vi->data, sizeof(vi->data));
> >
> > Note that add_early_randomness requests less:
> >         size_t size = min_t(size_t, 16, rng_buffer_size());
> >
> > maybe track how much was requested and grow up to sizeof(data)?
>
> I think this problem is managed by PATCH 3/4 as we reuse unused data of the buffer.

the issue I'm pointing out is that we are requesting too much
entropy from host - more than guest needs.

> > >  	/* There should always be room for one buffer. */
> > > -	virtqueue_add_inbuf(vi->vq, &sg, 1, buf, GFP_KERNEL);
> > > +	virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
> >
> > BTW no longer true if DMA API is in use ... not easy to fix,
> > I think some changes to virtio API to allow pre-mapping
> > s/g for DMA might be needed ...
>
> Is there something I can do here?
>
> Thanks,
> Laurent

We can let it be for now, but down the road I think we should
support a way to pre-map memory for DMA.

-- 
MST
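As a minimal sketch of the macro suggested above (the name
RNG_MIN_BUFFER_SIZE and its placement in include/linux/hw_random.h are
assumptions; nothing like this was posted in the thread), it could look
like:

/*
 * Hypothetical helper: pick the larger of 32 bytes and
 * SMP_CACHE_BYTES (from <linux/cache.h>) so a driver's internal
 * buffer is never smaller than what rng_buffer_size() can return.
 */
#if SMP_CACHE_BYTES < 32
#define RNG_MIN_BUFFER_SIZE	32
#else
#define RNG_MIN_BUFFER_SIZE	SMP_CACHE_BYTES
#endif

/* virtio-rng's struct virtrng_info could then declare simply: */
/*	u8 data[RNG_MIN_BUFFER_SIZE]; */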
On 23/09/2021 09:04, Michael S. Tsirkin wrote:
> On Thu, Sep 23, 2021 at 08:26:06AM +0200, Laurent Vivier wrote:
>> On 22/09/2021 21:02, Michael S. Tsirkin wrote:
>>> On Wed, Sep 22, 2021 at 07:09:00PM +0200, Laurent Vivier wrote:
>>>> hwrng core uses two buffers that can be mixed in the
>>>> virtio-rng queue.
>>>>
>>>> If the buffer is provided with wait=0 it is enqueued in the
>>>> virtio-rng queue but unused by the caller.
>>>> On the next call, core provides another buffer but the
>>>> first one is filled instead and the new one queued.
>>>> And the caller reads the data from the new one that is not
>>>> updated, and the data in the first one are lost.
>>>>
>>>> To avoid this mix, virtio-rng needs to use its own unique
>>>> internal buffer at a cost of a data copy to the caller buffer.
>>>>
>>>> Signed-off-by: Laurent Vivier <lvivier at redhat.com>
>>>> ---
>>>>  drivers/char/hw_random/virtio-rng.c | 43 ++++++++++++++++++++++-------
>>>>  1 file changed, 33 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
>>>> index a90001e02bf7..208c547dcac1 100644
>>>> --- a/drivers/char/hw_random/virtio-rng.c
>>>> +++ b/drivers/char/hw_random/virtio-rng.c
>>>> @@ -18,13 +18,20 @@ static DEFINE_IDA(rng_index_ida);
>>>>  struct virtrng_info {
>>>>  	struct hwrng hwrng;
>>>>  	struct virtqueue *vq;
>>>> -	struct completion have_data;
>>>>  	char name[25];
>>>> -	unsigned int data_avail;
>>>>  	int index;
>>>>  	bool busy;
>>>>  	bool hwrng_register_done;
>>>>  	bool hwrng_removed;
>>>> +	/* data transfer */
>>>> +	struct completion have_data;
>>>> +	unsigned int data_avail;
>>>> +	/* minimal size returned by rng_buffer_size() */
>>>> +#if SMP_CACHE_BYTES < 32
>>>> +	u8 data[32];
>>>> +#else
>>>> +	u8 data[SMP_CACHE_BYTES];
>>>> +#endif
>>>
>>> Let's move this logic to a macro in hw_random.h ?
>>>
>>>>  };
>>>>
>>>>  static void random_recv_done(struct virtqueue *vq)
>>>> @@ -39,14 +46,14 @@ static void random_recv_done(struct virtqueue *vq)
>>>>  }
>>>>
>>>>  /* The host will fill any buffer we give it with sweet, sweet randomness. */
>>>> -static void register_buffer(struct virtrng_info *vi, u8 *buf, size_t size)
>>>> +static void register_buffer(struct virtrng_info *vi)
>>>>  {
>>>>  	struct scatterlist sg;
>>>>
>>>> -	sg_init_one(&sg, buf, size);
>>>> +	sg_init_one(&sg, vi->data, sizeof(vi->data));
>>>
>>> Note that add_early_randomness requests less:
>>>         size_t size = min_t(size_t, 16, rng_buffer_size());
>>>
>>> maybe track how much was requested and grow up to sizeof(data)?
>>
>> I think this problem is managed by PATCH 3/4 as we reuse unused data of the buffer.
>
> the issue I'm pointing out is that we are requesting too much
> entropy from host - more than guest needs.

Yes, the guest asks for 16 bytes, but we request SMP_CACHE_BYTES (64 on
x86_64), and these 16 bytes are used with add_device_randomness().

With the following patches, the remaining 48 bytes are used rapidly by
the hwrng kthread or by the next virtio_read. If there is not enough
entropy, the call is simply ignored as wait=0.

At this patch level the call is always simply ignored (because wait=0),
and the data requested here is used by the next read, which always asks
for a block of SMP_CACHE_BYTES bytes.

Moreover, in PATCH 4/4 we always have a pending request of size
SMP_CACHE_BYTES, so the driver always asks for a block of this size and
the guest takes what it needs.

Originally I used a 16-byte block, but performance was divided by 4.

Do you propose something else?

Thanks,
Laurent
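For illustration, a rough sketch of how a read could drain leftover
bytes from the driver's internal buffer before requesting another
SMP_CACHE_BYTES-sized block from the host. This is not the code from
the posted series: the function name and the data_idx read offset are
hypothetical, and it assumes the driver's existing context (struct
virtrng_info, register_buffer(), random_recv_done() completing
have_data and updating data_avail).

static int virtio_read_sketch(struct hwrng *rng, void *buf,
			      size_t size, bool wait)
{
	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
	size_t chunk;

	/* nothing buffered: either give up (wait=0) or refill the buffer */
	if (vi->data_avail == 0) {
		if (!wait)
			return 0;
		register_buffer(vi);                 /* post vi->data to the virtqueue */
		wait_for_completion(&vi->have_data); /* random_recv_done() signals this
							and data_avail is assumed updated */
		vi->data_idx = 0;                    /* hypothetical offset, reset on refill */
	}

	/* copy only what the caller asked for, keep the rest for later reads */
	chunk = min_t(size_t, size, vi->data_avail);
	memcpy(buf, vi->data + vi->data_idx, chunk);
	vi->data_idx += chunk;
	vi->data_avail -= chunk;

	return chunk;
}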