Jiri Pirko
2018-Feb-28 15:11 UTC
[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device
Wed, Feb 28, 2018 at 03:32:44PM CET, mst at redhat.com wrote:
>On Wed, Feb 28, 2018 at 08:08:39AM +0100, Jiri Pirko wrote:
>> Tue, Feb 27, 2018 at 10:41:49PM CET, kubakici at wp.pl wrote:
>> >On Tue, 27 Feb 2018 13:16:21 -0800, Alexander Duyck wrote:
>> >> Basically we need some sort of PCI or PCIe topology mapping for the
>> >> devices that can be translated into something we can communicate over
>> >> the communication channel.
>> >
>> >Hm. This is probably a completely stupid idea, but if we need to
>> >start marshalling configuration requests/hints maybe the entire problem
>> >could be solved by opening a netlink socket from hypervisor? Even make
>> >teamd run on the hypervisor side...
>>
>> Interesting. That would be trickier than just forwarding one genetlink
>> socket to the hypervisor.
>>
>> Also, I think that the solution should handle multiple guest OSes. What
>> I'm thinking about is some generic bonding description passed over some
>> communication channel into the VM. The VM either uses it for configuration,
>> or ignores it if it is not smart enough/updated enough.
>
>For sure, we could build virtio-bond to pass that info to guests.

What do you mean by "virtio-bond"? A virtio_net extension?

>
>Such an advisory mechanism would not be a replacement for the mandatory
>passthrough fallback flag proposed, but OTOH it's much more flexible.
>
>--
>MST
Michael S. Tsirkin
2018-Feb-28 15:45 UTC
[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device
On Wed, Feb 28, 2018 at 04:11:31PM +0100, Jiri Pirko wrote:
> Wed, Feb 28, 2018 at 03:32:44PM CET, mst at redhat.com wrote:
> >On Wed, Feb 28, 2018 at 08:08:39AM +0100, Jiri Pirko wrote:
> >> Tue, Feb 27, 2018 at 10:41:49PM CET, kubakici at wp.pl wrote:
> >> >On Tue, 27 Feb 2018 13:16:21 -0800, Alexander Duyck wrote:
> >> >> Basically we need some sort of PCI or PCIe topology mapping for the
> >> >> devices that can be translated into something we can communicate over
> >> >> the communication channel.
> >> >
> >> >Hm. This is probably a completely stupid idea, but if we need to
> >> >start marshalling configuration requests/hints maybe the entire problem
> >> >could be solved by opening a netlink socket from hypervisor? Even make
> >> >teamd run on the hypervisor side...
> >>
> >> Interesting. That would be trickier than just forwarding one genetlink
> >> socket to the hypervisor.
> >>
> >> Also, I think that the solution should handle multiple guest OSes. What
> >> I'm thinking about is some generic bonding description passed over some
> >> communication channel into the VM. The VM either uses it for configuration,
> >> or ignores it if it is not smart enough/updated enough.
> >
> >For sure, we could build virtio-bond to pass that info to guests.
>
> What do you mean by "virtio-bond"? A virtio_net extension?

I mean a new device supplying topology information to guests,
with updates whenever VMs are started, stopped or migrated.

> >
> >Such an advisory mechanism would not be a replacement for the mandatory
> >passthrough fallback flag proposed, but OTOH it's much more flexible.
> >
> >--
> >MST
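[Editor's note: for concreteness, here is a minimal sketch of what such a "virtio-bond" device's config space might carry. Every structure, field and name below is invented purely for illustration; nothing like it exists in the virtio spec or in this RFC. The idea follows the thread: a generation counter bumped on VM start/stop/migration, plus a list of slave devices to group, matched by MAC.]

    /*
     * Hypothetical virtio-bond config layout -- illustration only,
     * not part of the virtio spec or of this patch set.
     */
    #include <stdint.h>

    struct virtio_bond_slave {
            uint8_t  mac[6];     /* match the slave netdev by permanent MAC      */
            uint8_t  role;       /* 0 = primary (passthru), 1 = standby (virtio) */
            uint8_t  reserved;
            uint32_t link_prio;  /* failover priority hint                       */
    };

    struct virtio_bond_config {
            uint32_t generation; /* bumped when a VM is started/stopped/migrated */
            uint32_t num_slaves; /* number of entries that follow                */
            struct virtio_bond_slave slaves[];
    };

[A guest that understands the device could feed this into its team/bonding setup; an older guest would simply ignore the device, which keeps the mandatory passthrough fallback flag as the baseline, as discussed above.]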
Jiri Pirko
2018-Feb-28 19:25 UTC
[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device
Wed, Feb 28, 2018 at 04:45:39PM CET, mst at redhat.com wrote:
>On Wed, Feb 28, 2018 at 04:11:31PM +0100, Jiri Pirko wrote:
>> Wed, Feb 28, 2018 at 03:32:44PM CET, mst at redhat.com wrote:
>> >On Wed, Feb 28, 2018 at 08:08:39AM +0100, Jiri Pirko wrote:
>> >> Tue, Feb 27, 2018 at 10:41:49PM CET, kubakici at wp.pl wrote:
>> >> >On Tue, 27 Feb 2018 13:16:21 -0800, Alexander Duyck wrote:
>> >> >> Basically we need some sort of PCI or PCIe topology mapping for the
>> >> >> devices that can be translated into something we can communicate over
>> >> >> the communication channel.
>> >> >
>> >> >Hm. This is probably a completely stupid idea, but if we need to
>> >> >start marshalling configuration requests/hints maybe the entire problem
>> >> >could be solved by opening a netlink socket from hypervisor? Even make
>> >> >teamd run on the hypervisor side...
>> >>
>> >> Interesting. That would be trickier than just forwarding one genetlink
>> >> socket to the hypervisor.
>> >>
>> >> Also, I think that the solution should handle multiple guest OSes. What
>> >> I'm thinking about is some generic bonding description passed over some
>> >> communication channel into the VM. The VM either uses it for configuration,
>> >> or ignores it if it is not smart enough/updated enough.
>> >
>> >For sure, we could build virtio-bond to pass that info to guests.
>>
>> What do you mean by "virtio-bond"? A virtio_net extension?
>
>I mean a new device supplying topology information to guests,
>with updates whenever VMs are started, stopped or migrated.

Good. Any idea what that device would look like? Also, any idea how to
handle it in the kernel and how to pass this info along to userspace?
Is there anything similar out there?

Thanks!
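[Editor's note: one possible shape for the kernel side, sketched only under the assumption of the hypothetical config layout above. The driver name, virtio_bond struct and refresh_work are made-up; only the virtio config_changed callback, workqueues and uevents are existing kernel mechanisms. The idea: on every config-change interrupt, re-read the topology blob in process context and raise a udev change event so a userspace agent (teamd, NetworkManager, or similar) can react.]

    /* Hypothetical virtio-bond driver fragment -- illustration only. */
    #include <linux/virtio.h>
    #include <linux/workqueue.h>
    #include <linux/kobject.h>

    struct virtio_bond {
            struct virtio_device *vdev;
            struct work_struct refresh_work;  /* re-reads the topology blob */
    };

    /* Invoked by the virtio core when the host updates the device config. */
    static void virtio_bond_config_changed(struct virtio_device *vdev)
    {
            struct virtio_bond *vb = vdev->priv;
            char *envp[] = { "EVENT=topology_update", NULL };

            /* Defer the config-space read to process context. */
            schedule_work(&vb->refresh_work);

            /* Notify udev so userspace agents can pick up the new topology. */
            kobject_uevent_env(&vdev->dev.kobj, KOBJ_CHANGE, envp);
    }

[How the parsed description itself would then be exposed to userspace -- sysfs, a genetlink family, or rtnetlink attributes -- is exactly the open question raised above; the uevent here is only the notification path.]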