On Wed, Oct 27, 2021 at 4:12 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
>
> [ Cc'ing Gurchetan Singh ]
>
> > Can we follow up on this issue?
> >
> > The main pain point with your suggestion is the fact that it will
> > cause VirGL protocol breakage, and we would like to avoid this.
> >
> > Extending the execbuffer ioctl and create_resource ioctl is
> > more convenient than breaking the protocol.
>
> Do you know at resource creation time whether you need cpu access
> or not? IOW can we make that a resource property, so we don't have
> to pass around lists of objects on each and every execbuffer ioctl?
Yes. The userspace driver should be able to internally mark the
resource as NO TRANSFER and omit it from the execbuffer bo list. It
can be tricky, though, because resource wait will no longer work for
such resources.
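
As a rough sketch of what that could look like on the guest side (the
NO_TRANSFER flag below is hypothetical; only the surrounding ioctl and
struct exist in today's uapi, and the gallium-style parameters are
assumed to be in scope):

    #include <xf86drm.h>
    #include <libdrm/virtgpu_drm.h>

    /* hypothetical flag, not in virtgpu_drm.h today */
    #define VIRTGPU_RESOURCE_FLAG_NO_TRANSFER 0x1

    struct drm_virtgpu_resource_create args = {
        .target     = target,
        .format     = format,
        .bind       = bind,
        .width      = width,
        .height     = height,
        .depth      = 1,
        .array_size = 1,
        .flags      = VIRTGPU_RESOURCE_FLAG_NO_TRANSFER,
        .size       = size,
    };
    drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &args);
    /* the kernel fills args.bo_handle / args.res_handle; the driver
     * can then leave this bo out of the bo_handles array it passes
     * to execbuffer */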
>
> > Blob resources are not a solution either, because QEMU doesn't
> > support blob resources right now.
> >
> > In an ideal solution, when both QEMU and crosvm support blob
> > resources, we can switch to blob resources for textures.
>
> That'll only happen if someone works on it.
> I will not be able to do that though.
>
> > I would like to introduce changes gradually and not break the
> > protocol.
>
> Well, opengl switching to blob resources is a protocol change anyway.
> That doesn't imply things will break though. We have capsets etc to
> extend the protocol while maintaining backward compatibility.
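
(For reference, the guest-side check would go through the existing
DRM_IOCTL_VIRTGPU_GET_CAPS path; a minimal sketch, where the capset id
and the layout of the returned blob depend on the protocol revision:)

    #include <stdint.h>
    #include <xf86drm.h>
    #include <libdrm/virtgpu_drm.h>

    char buf[1024];  /* sized for the capset being queried */
    struct drm_virtgpu_get_caps caps = {
        .cap_set_id  = 2,  /* e.g. VIRTIO_GPU_CAPSET_VIRGL2 */
        .cap_set_ver = 0,
        .addr = (__u64)(uintptr_t)buf,
        .size = sizeof(buf),
    };
    if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_GET_CAPS, &caps) == 0) {
        /* parse buf and emit the new commands only when the host
         * advertises support for them */
    }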
>
> > What do you think about that?
>
> I still think that switching to blob resources would be the better
> solution. Yes, it's a lot of work and not something which helps
> short-term. But adding a new API as interim solution isn't great
> either. So, not sure. Chia-I Wu? Gurchetan Singh?
I can agree with that, although it depends on how much work needs to
happen in userspace for virgl.
We will look into a userspace-only solution, or at least something not
involving uAPI additions.
>
>
> While we are at it: Chia-I Wu & Gurchetan Singh, could you help
> review virtio-gpu kernel patches? I think you both have a better
> view of the big picture (with both mesa and virglrenderer) than I have.
> Also for the crosvm side of things. The procedure for that would be to
> send a patch adding yourself to the virtio-gpu section of the
> MAINTAINERS file, so scripts/get_maintainer.pl will Cc you on patches
> submitted.
Sure. Will do.
>
> thanks,
> Gerd
>
> >
> > Maksym
> >
> >
> > On 10/22/21 10:44 AM, Maksym Wezdecki wrote:
> >
> > > Once again with all lists and receivers. I'm sorry for that.
> > >
> > > On 10/21/21 6:42 PM, Chia-I Wu wrote:
> > >> On Thu, Oct 21, 2021 at 4:52 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > >>> On Thu, Oct 21, 2021 at 11:55:47AM +0200, Maksym Wezdecki wrote:
> > >>>> I get your point. However, we need to make a resource_create
> > >>>> ioctl in order to create the corresponding resource on the host.
> > >>> That used to be the case but isn't true any more with the new
> > >>> blob resources. virglrenderer allows creating gpu objects via
> > >>> execbuffer. Those gpu objects can be linked to virtio-gpu
> > >>> resources, but that is optional. In your case you would do that
> > >>> only for your staging buffer. The other textures (which you fill
> > >>> with a host-side copy from the staging buffer) do not need a
> > >>> virtio-gpu resource in the first place.
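
(For context, the optional linking step described above maps to
DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB; a rough sketch, assuming the
host gpu object was already created via execbuffer and tagged with a
guest-chosen blob_id:)

    struct drm_virtgpu_resource_create_blob blob = {
        .blob_mem   = VIRTGPU_BLOB_MEM_HOST3D,
        .blob_flags = VIRTGPU_BLOB_FLAG_USE_MAPPABLE,
        .size       = size,
        .blob_id    = blob_id,  /* matches the id used in execbuffer */
    };
    drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &blob);
    /* blob.bo_handle / blob.res_handle now refer to a resource backed
     * by the host gpu object; the staging buffer would get one, the
     * host-only textures would not */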
> > >> That's however a breaking change to the virgl protocol. All
> > >> virgl commands expect res ids rather than blob ids.
> > >>
> > >> Using VIRTGPU_BLOB_MEM_HOST3D could in theory work. But there
> > >> are a few occasions where virglrenderer expects there to be
> > >> guest storage. There are also readbacks that we need to support.
> > >> We might end up using VIRTGPU_BLOB_MEM_HOST3D_GUEST, where
> > >> pin-on-demand is still desirable.
> > >>
> > >> For this patch, I think the uapi change can be simplified. It
> > >> can be a new param plus a new field in drm_virtgpu_execbuffer:
> > >>
> > >>     __u64 bo_flags; /* pointer to an array of size num_bo_handles, or NULL */
> > >>
> > >> The other changes do not seem needed.
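
(To make that concrete: the struct would then look roughly like this,
where bo_flags is the proposed addition and the rest is the existing
uapi, whose exact field set varies a bit by kernel version:)

    struct drm_virtgpu_execbuffer {
            __u32 flags;
            __u32 size;
            __u64 command;        /* void* */
            __u64 bo_handles;
            __u32 num_bo_handles;
            __s32 fence_fd;       /* in/out fence fd */
            __u32 ring_idx;       /* command ring index */
            __u32 pad;
            __u64 bo_flags;       /* proposed: pointer to an array of
                                   * num_bo_handles flag words, or NULL */
    };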
> > > My first approach was the same as you mentioned. However, having
> > > "__u64 bo_flags" needs a for loop over all handles, even though
> > > only a few of the bos need to be pinned - this has performance
> > > implications. I would rather pass the bo handles that should be
> > > pinned than have a flag word for every bo when only 1-2 bos need
> > > to be pinned.
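
(In uapi terms that alternative would be a second, typically short
array alongside bo_handles; the field names here are hypothetical:)

    __u64 pin_handles;     /* pointer to an array of bo handles to pin */
    __u32 num_pin_handles; /* usually just the 1-2 staging buffers */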
> > >
> > >>> take care,
> > >>> Gerd
> > >>>
>