Hi Arnaud,

On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> Hi Guennadi,
>
> On 9/1/20 5:11 PM, Guennadi Liakhovetski wrote:
> > Hi,
> >
> > Next update:
> >
> > v6:
> > - rename include/linux/virtio_rpmsg.h -> include/linux/rpmsg/virtio.h
> >
> > v5:
> > - don't hard-code message layout
> >
> > v4:
> > - add endianness conversions to comply with the VirtIO standard
> >
> > v3:
> > - address several checkpatch warnings
> > - address comments from Mathieu Poirier
> >
> > v2:
> > - update patch #5 with a correct vhost_dev_init() prototype
> > - drop patch #6 - it depends on a different patch that is currently
> >   an RFC
> > - address comments from Pierre-Louis Bossart:
> >   * remove "default n" from Kconfig
> >
> > Linux supports RPMsg over VirtIO for "remote processor" / AMP use
> > cases. It can however also be used for virtualisation scenarios,
> > e.g. when using KVM to run Linux on both the host and the guests.
> > This patch set adds a wrapper API to facilitate writing vhost
> > drivers for such RPMsg-based solutions. The first use case is an
> > audio DSP virtualisation project, currently under development, ready
> > for review and submission, available at
> > https://github.com/thesofproject/linux/pull/1501/commits
>
> Mathieu pointed me to your series. On my side I proposed the rpmsg_ns_msg
> service [1], which does not match your implementation.
> As I came to this late, I hope that I did not miss something in the history...
> Don't hesitate to point me to the relevant discussions if that is the case.

Well, as you can see, this patch set is already at v6, and apart from it there
have been several side discussions and patch sets.

> Regarding your patch set, it is quite confusing for me. It seems that you
> implement your own protocol on top of vhost, forked from the RPMsg one.
> But it looks to me like it is not the RPMsg protocol.

I'm implementing a counterpart to the RPMsg protocol over VirtIO as initially
implemented by drivers/rpmsg/virtio_rpmsg_bus.c for the "main CPU" (in the
case of remoteproc over VirtIO) or for the guest side in the case of Linux
virtualisation. Since my implementation can talk to that driver, I don't
think I'm inventing a new protocol. I'm adding support for the same protocol
for the opposite side of the VirtIO divide.

> So I would agree with Vincent [2], who proposed switching to an RPMsg API
> and creating a vhost rpmsg device. This is also proposed in the
> "Enhance VHOST to enable SoC-to-SoC communication" RFC [3].
> Do you think that this alternative could match your need?

As I replied to Vincent, I understand his proposal and the approach taken in
the series [3], but I'm not sure I agree that adding yet another virtual
device / driver layer on the vhost side is a good idea. As far as I
understand, adding new, completely virtual devices isn't considered good
practice in the kernel. Currently vhost is just a passive "library", and my
vhost-rpmsg support keeps it that way. I'm not sure I'm in favour of
converting vhost to a virtual device infrastructure.

Thanks for pointing me at [3], I should have a better look at it.

Thanks
Guennadi

> [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
>
> Thanks,
> Arnaud
>
> >
> > Thanks
> > Guennadi
> >
> > Guennadi Liakhovetski (4):
> >   vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
> >   rpmsg: move common structures and defines to headers
> >   rpmsg: update documentation
> >   vhost: add an RPMsg API
> >
> >  Documentation/rpmsg.txt          |   6 +-
> >  drivers/rpmsg/virtio_rpmsg_bus.c |  78 +------
> >  drivers/vhost/Kconfig            |   7 +
> >  drivers/vhost/Makefile           |   3 +
> >  drivers/vhost/rpmsg.c            | 373 +++++++++++++++++++++++++++++++
> >  drivers/vhost/vhost_rpmsg.h      |  74 ++++++
> >  include/linux/rpmsg/virtio.h     |  83 +++++++
> >  include/uapi/linux/rpmsg.h       |   3 +
> >  include/uapi/linux/vhost.h       |   4 +-
> >  9 files changed, 551 insertions(+), 80 deletions(-)
> >  create mode 100644 drivers/vhost/rpmsg.c
> >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> >  create mode 100644 include/linux/rpmsg/virtio.h
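For context, the wire-level compatibility Guennadi refers to comes down to both
sides producing and parsing the same message header. A sketch of that layout,
as defined in drivers/rpmsg/virtio_rpmsg_bus.c (patch 2 of this series moves it
into include/linux/rpmsg/virtio.h, and v4 added VirtIO endianness annotations,
omitted here for brevity):

#include <linux/types.h>

/*
 * On-wire header prepended to every RPMsg message on the virtqueues,
 * in either direction.  Shown only to illustrate what "the same
 * protocol on the opposite side of the VirtIO divide" has to produce
 * and parse.
 */
struct rpmsg_hdr {
	u32 src;	/* source endpoint address */
	u32 dst;	/* destination endpoint address */
	u32 reserved;
	u16 len;	/* payload length in bytes */
	u16 flags;
	u8 data[];	/* payload follows immediately */
} __packed;

In the virtio_rpmsg_bus.c implementation each buffer is 512 bytes including
this header, so the payload per message is limited accordingly.
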
On Thu, Sep 17, 2020 at 07:47:06AM +0200, Guennadi Liakhovetski wrote:
> On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > So I would agree with Vincent [2], who proposed switching to an RPMsg API
> > and creating a vhost rpmsg device. This is also proposed in the
> > "Enhance VHOST to enable SoC-to-SoC communication" RFC [3].
> > Do you think that this alternative could match your need?
>
> As I replied to Vincent, I understand his proposal and the approach taken
> in the series [3], but I'm not sure I agree that adding yet another
> virtual device / driver layer on the vhost side is a good idea. As far as
> I understand, adding new, completely virtual devices isn't considered good
> practice in the kernel. Currently vhost is just a passive "library", and
> my vhost-rpmsg support keeps it that way. I'm not sure I'm in favour of
> converting vhost to a virtual device infrastructure.

I know it wasn't what you meant, but I noticed that the above paragraph
could be read as if my suggestion was to convert vhost to a virtual device
infrastructure, so I just want to clarify that those are not related. The
only similarity between what I suggested in the thread in [2] and Kishon's
RFC in [3] is that both involve creating a generic vhost-rpmsg driver which
would allow the RPMsg API to be used for both sides of the link, instead of
introducing a new API just for the server side. That can be done without
rewriting drivers/vhost/.

> > [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
Hi Vincent,

On Thu, Sep 17, 2020 at 10:36:44AM +0200, Vincent Whitchurch wrote:
> On Thu, Sep 17, 2020 at 07:47:06AM +0200, Guennadi Liakhovetski wrote:
> > On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > > So I would agree with Vincent [2], who proposed switching to an RPMsg API
> > > and creating a vhost rpmsg device. This is also proposed in the
> > > "Enhance VHOST to enable SoC-to-SoC communication" RFC [3].
> > > Do you think that this alternative could match your need?
> >
> > As I replied to Vincent, I understand his proposal and the approach taken
> > in the series [3], but I'm not sure I agree that adding yet another
> > virtual device / driver layer on the vhost side is a good idea. As far as
> > I understand, adding new, completely virtual devices isn't considered good
> > practice in the kernel. Currently vhost is just a passive "library", and
> > my vhost-rpmsg support keeps it that way. I'm not sure I'm in favour of
> > converting vhost to a virtual device infrastructure.
>
> I know it wasn't what you meant, but I noticed that the above paragraph
> could be read as if my suggestion was to convert vhost to a virtual device
> infrastructure, so I just want to clarify that those are not related. The
> only similarity between what I suggested in the thread in [2] and Kishon's
> RFC in [3] is that both involve creating a generic vhost-rpmsg driver which
> would allow the RPMsg API to be used for both sides of the link, instead of
> introducing a new API just for the server side. That can be done without
> rewriting drivers/vhost/.

Thanks for the clarification. Another flexibility that I'm trying to preserve
with my approach is keeping direct access to iovec-style data buffers, for
cases where that is the structure already used by the respective driver on
the host side. Since we already do packing and unpacking on the guest /
client side, we don't need to do the same again on the host / server side.

Thanks
Guennadi

> > > [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
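To make the iovec point concrete, here is a minimal sketch of the host-side
pattern being described. vhost_get_vq_desc() and vhost_add_used_and_signal()
are existing vhost core helpers; handle_guest_request() and consume_iov() are
hypothetical names used only for illustration and are not the API added by
this series:

#include <linux/kernel.h>
#include <linux/uio.h>
#include "vhost.h"		/* drivers/vhost/vhost.h */

/*
 * On the vhost side, guest buffers already arrive as an iovec, so a
 * host-side driver can hand them straight to its consumer instead of
 * first copying them into a linear buffer.
 */
static int handle_guest_request(struct vhost_virtqueue *vq,
				int (*consume_iov)(struct iovec *iov,
						   unsigned int cnt))
{
	struct iovec iov[8];
	unsigned int out, in;
	int head, ret;

	/* Fetch the next available descriptor chain as an iovec. */
	head = vhost_get_vq_desc(vq, iov, ARRAY_SIZE(iov), &out, &in,
				 NULL, NULL);
	if (head < 0)
		return head;		/* error */
	if (head == vq->num)
		return -EAGAIN;		/* nothing queued by the guest */

	/* Pass the guest-readable buffers to the consumer without copying. */
	ret = consume_iov(iov, out);

	/* Return the descriptor chain to the guest. */
	vhost_add_used_and_signal(vq->dev, vq, head, 0);
	return ret;
}
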
Hi Guennadi,

> -----Original Message-----
> From: Guennadi Liakhovetski <guennadi.liakhovetski at linux.intel.com>
> Sent: Thursday, 17 September 2020 07:47
> To: Arnaud POULIQUEN <arnaud.pouliquen at st.com>
> Cc: kvm at vger.kernel.org; linux-remoteproc at vger.kernel.org;
> virtualization at lists.linux-foundation.org;
> sound-open-firmware at alsa-project.org;
> Pierre-Louis Bossart <pierre-louis.bossart at linux.intel.com>;
> Liam Girdwood <liam.r.girdwood at linux.intel.com>;
> Michael S. Tsirkin <mst at redhat.com>; Jason Wang <jasowang at redhat.com>;
> Ohad Ben-Cohen <ohad at wizery.com>;
> Bjorn Andersson <bjorn.andersson at linaro.org>;
> Mathieu Poirier <mathieu.poirier at linaro.org>;
> Vincent Whitchurch <vincent.whitchurch at axis.com>
> Subject: Re: [PATCH v6 0/4] Add a vhost RPMsg API
>
> [...]
>
> > Regarding your patch set, it is quite confusing for me. It seems that you
> > implement your own protocol on top of vhost, forked from the RPMsg one.
> > But it looks to me like it is not the RPMsg protocol.
>
> I'm implementing a counterpart to the RPMsg protocol over VirtIO as initially
> implemented by drivers/rpmsg/virtio_rpmsg_bus.c for the "main CPU" (in the
> case of remoteproc over VirtIO) or for the guest side in the case of Linux
> virtualisation. Since my implementation can talk to that driver, I don't
> think I'm inventing a new protocol. I'm adding support for the same protocol
> for the opposite side of the VirtIO divide.

The main point I would like to highlight here is related to the use of the
name "RPMsg", more than to how you implement your IPC protocol.
If it is a counterpart, it probably does not respect the interface expected
by RPMsg clients.
A good way to answer this might be to respond to this question:
can the rpmsg sample client [4] be used on top of your vhost RPMsg
implementation?
If the answer is no, describing it as an RPMsg implementation could lead to
confusion...

[4] https://elixir.bootlin.com/linux/v5.9-rc5/source/samples/rpmsg/rpmsg_client_sample.c

Regards,
Arnaud

> [...]
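For reference, the sample client Arnaud points to boils down to roughly the
following (a condensed sketch of samples/rpmsg/rpmsg_client_sample.c [4], not a
verbatim copy): it binds to the "rpmsg-client-sample" service announced by the
remote side and only ever uses the generic rpmsg_driver registration and
rpmsg_send() on the probed device's endpoint. The question above is whether the
vhost side announces such channels and serves this API, or only speaks the wire
format:

#include <linux/module.h>
#include <linux/rpmsg.h>

/* Called for every message addressed to this device's endpoint. */
static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len,
			   void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	/* Send something back over the same endpoint. */
	return rpmsg_send(rpdev->ept, "hello world!", 12);
}

/* Called when a matching channel is announced by the other side. */
static int rpmsg_sample_probe(struct rpmsg_device *rpdev)
{
	return rpmsg_send(rpdev->ept, "hello world!", 12);
}

static struct rpmsg_device_id rpmsg_sample_id_table[] = {
	{ .name = "rpmsg-client-sample" },
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, rpmsg_sample_id_table);

static struct rpmsg_driver rpmsg_sample_client = {
	.drv.name = KBUILD_MODNAME,
	.id_table = rpmsg_sample_id_table,
	.probe    = rpmsg_sample_probe,
	.callback = rpmsg_sample_cb,
};
module_rpmsg_driver(rpmsg_sample_client);
MODULE_LICENSE("GPL v2");
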
Hi Arnaud,

On Thu, Sep 17, 2020 at 05:21:02PM +0200, Arnaud POULIQUEN wrote:
> Hi Guennadi,
>
> [...]
>
> The main point I would like to highlight here is related to the use of the
> name "RPMsg", more than to how you implement your IPC protocol.
> If it is a counterpart, it probably does not respect the interface expected
> by RPMsg clients.
> A good way to answer this might be to respond to this question:
> can the rpmsg sample client [4] be used on top of your vhost RPMsg
> implementation?
> If the answer is no, describing it as an RPMsg implementation could lead to
> confusion...

Sorry, I don't quite understand your logic. RPMsg is a communication protocol,
not an API. An RPMsg implementation has to be able to communicate with other
compliant RPMsg implementations; it doesn't have to provide any specific API.
Am I missing anything?

Thanks
Guennadi

> [4] https://elixir.bootlin.com/linux/v5.9-rc5/source/samples/rpmsg/rpmsg_client_sample.c
>
> Regards,
> Arnaud
>
> [...]
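One concrete piece of that wire-level compatibility, and the mechanism Arnaud's
name-service series [1] is concerned with, is the name-service announcement: it
is sent to the fixed address 53 (RPMSG_NS_ADDR) when a channel is created or
destroyed, and when drivers/rpmsg/virtio_rpmsg_bus.c receives one it creates
the rpmsg_device that a client such as the sample above then probes against. A
sketch of the layout as used by that driver (endianness annotations omitted):

#include <linux/types.h>

#define RPMSG_NAME_SIZE	32

enum rpmsg_ns_flags {
	RPMSG_NS_CREATE		= 0,
	RPMSG_NS_DESTROY	= 1,
};

/*
 * Name-service announcement, carried as the payload of a message sent
 * to RPMSG_NS_ADDR (53).  The receiving side registers or removes the
 * corresponding rpmsg_device for the named service.
 */
struct rpmsg_ns_msg {
	char name[RPMSG_NAME_SIZE];	/* service name, e.g. "rpmsg-client-sample" */
	u32 addr;			/* announcing endpoint's address */
	u32 flags;			/* RPMSG_NS_CREATE or RPMSG_NS_DESTROY */
} __packed;

Whichever side plays the "server" role has to emit or consume these
announcements for client drivers on the other side to be probed at all,
regardless of which in-kernel API sits above the transport.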