Andrew Lunn
2019-Jul-26 13:46 UTC
[Bridge] [PATCH] net: bridge: Allow bridge to join multicast groups
On Fri, Jul 26, 2019 at 02:02:15PM +0200, Horatiu Vultur wrote:
> Hi Nikolay,
>
> The 07/26/2019 12:26, Nikolay Aleksandrov wrote:
> > On 26/07/2019 11:41, Nikolay Aleksandrov wrote:
> > > On 25/07/2019 17:21, Horatiu Vultur wrote:
> > >> Hi Nikolay,
> > >>
> > >> The 07/25/2019 16:21, Nikolay Aleksandrov wrote:
> > >>> On 25/07/2019 16:06, Nikolay Aleksandrov wrote:
> > >>>> On 25/07/2019 14:44, Horatiu Vultur wrote:
> > >>>>> There is no way to configure the bridge to receive only specific
> > >>>>> link-layer multicast addresses. From its description, 'bridge fdb
> > >>>>> append' is supposed to do that, but there was no way to notify
> > >>>>> the network driver that the bridge joined a group, because the
> > >>>>> LLADDR was added to the unicast netdev_hw_addr_list.
> > >>>>>
> > >>>>> Therefore update fdb_add_entry to check if the NLM_F_APPEND flag
> > >>>>> is set and if the source is NULL, which represents the bridge
> > >>>>> itself. Then add the address to the multicast netdev_hw_addr_list
> > >>>>> of each bridge interface, so that the driver's .ndo_set_rx_mode
> > >>>>> function is called to notify it that the list of multicast MAC
> > >>>>> addresses changed.
> > >>>>>
> > >>>>> Signed-off-by: Horatiu Vultur <horatiu.vultur at microchip.com>
> > >>>>> ---
> > >>>>>  net/bridge/br_fdb.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++---
> > >>>>>  1 file changed, 46 insertions(+), 3 deletions(-)
> > >>>>>
> > >>>>
> > >>>> Hi,
> > >>>> I'm sorry, but this patch is wrong on many levels; some notes
> > >>>> below. In general NLM_F_APPEND is only used in vxlan; the bridge
> > >>>> does not handle that flag at all. The FDB is only for *unicast*:
> > >>>> nothing is joined, and no multicast should be used with FDBs. The
> > >>>> MDB is used for multicast handling, but both of these are used for
> > >>>> forwarding. The reason static FDBs are added to the filter is for
> > >>>> non-promisc ports, so that they can receive traffic destined for
> > >>>> these FDBs for forwarding. If you'd like to join a multicast
> > >>>> group, please use the standard way; if you'd like to join it only
> > >>>> on a specific port, join it only on that port (or ports) and the
> > >>>> bridge, and you'll
> > >>>
> > >>> And obviously this is for the case where you're not enabling port
> > >>> promisc mode (non-default). In general you'll only need to join the
> > >>> group on the bridge to receive traffic for it, or add it as an MDB
> > >>> entry to forward it.
> > >>>
> > >>>> have the effect that you're describing. What do you mean, there's
> > >>>> no way?
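The two approaches Nikolay points to can be expressed with plain
iproute2. A minimal sketch, assuming a bridge br0 with member port swp1
and the group 239.1.1.1 (all placeholder names, not from the thread):

    # Forwarding: add an MDB entry so the group is sent only out of
    # swp1 instead of being flooded to every port.
    bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent

    # Local reception: join the group on the bridge device itself so
    # the traffic is passed up the stack (01:00:5e:01:01:01 is the
    # mapped MAC for 239.1.1.1).
    ip maddr add 01:00:5e:01:01:01 dev br0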
> > >> Thanks for the explanation.
> > >> There are a few things that are not 100% clear to me; maybe you
> > >> can explain them, so I don't go totally in the wrong direction.
> > >> Currently I am writing a network driver to which I have added
> > >> switchdev support, and I was looking for a way to configure it to
> > >> copy link-layer multicast addresses to the CPU port.
> > >>
> > >> With bridge mdb I can do that only for IP multicast addresses, but
> > >> how should I do it if I want non-IP frames with a link-layer
> > >> multicast address to be copied to the CPU? For example: all frames
> > >> with the multicast address '01-21-6C-00-00-01'. What is the
> > >> user-space command for that?
> > >>
> > >
> > > Check SIOCADDMULTI (ip maddr from iproute2), f.e. add that mac to
> > > the port which needs to receive it, and the bridge will send it up
> > > automatically since it's unknown mcast (note that if there's a
> > > querier, you'll have to make the bridge a mcast router if it is not
> > > the querier itself). It would also flood it to all
> >
> > Actually you mentioned non-IP traffic, so the querier stuff is not a
> > problem. This traffic will always be flooded by the bridge (and a
> > copy will also be sent up locally). Thus only the flooding may need
> > to be controlled.
>
> OK, I see, but the part which is not clear to me is: which bridge
> command (from iproute2) should I use so that the bridge notifies the
> network driver (using switchdev or not) to configure the HW to copy
> all frames with dmac '01-21-6C-00-00-01' to the CPU, so that the
> bridge can receive those frames and just pass them up?

Hi Horatiu

Something to keep in mind. By default, multicast should be flooded, and
that includes the CPU port for a DSA driver. Adding an MDB entry allows
for optimisations, limiting which ports a multicast frame goes out of.
But it is just an optimisation.

    Andrew
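Putting the answers together for the non-IP group from the thread: a
minimal sketch, again with placeholder interface names swp1 and br0:

    # Subscribe the port to the link-layer group; this uses
    # SIOCADDMULTI under the hood, ends up in dev_mc_add() and so
    # triggers the driver's .ndo_set_rx_mode.
    ip maddr add 01:21:6c:00:00:01 dev swp1

    # The bridge floods unknown multicast anyway; where that is not
    # wanted, per-port flooding can be turned off.
    bridge link set dev swp1 mcast_flood off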
Allan W. Nielsen
2019-Jul-26 19:50 UTC
[Bridge] [PATCH] net: bridge: Allow bridge to join multicast groups
Hi All,

I'm working on the same project as Horatiu.

The 07/26/2019 15:46, Andrew Lunn wrote:
> By default, multicast should be flooded, and that includes the CPU
> port for a DSA driver. Adding an MDB entry allows for optimisations,
> limiting which ports a multicast frame goes out of. But it is just an
> optimisation.

Do you do this for all VLANs, or is there a way to do it only for VLANs
that the CPU is supposed to take part of? I assume we could limit the
behaviour to VLANs which the bridge interface is part of - but I'm not
sure if this is what you suggest.

As you probably guessed, this model is quite different from what we are
used to.

Just for context: the Ethernet switches made by Vitesse, which was
acquired by Microsemi and has now become Microchip, have until now been
supported by an MIT-licensed API (running in user space) and a protocol
stack running on top of that API. In this model we have been used to
explicitly configuring which packets go to the CPU. Typically this
would be the MAC addresses of the interface itself, the multicast
addresses required by the IP stack (v4 and v6), and the broadcast
address - and only on VLANs which are configured as L3 interfaces.

We may be able to make it work by flooding all multicast traffic by
default and using a low-priority CPU queue, but I'm having a hard time
getting used to this model (maybe time will help).

Is it considered required to include the CPU in all multicast flood
masks? Or do you know if this differs from driver to driver?
Alternatively, would it make sense to make this behaviour configurable?

/Allan
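One existing knob that comes close to "VLANs the CPU takes part of" is
the bridge device's own VLAN membership; a sketch with placeholder
names, assuming vlan_filtering is enabled on br0:

    # Make the bridge device itself (i.e. the local/CPU path) a member
    # of VLAN 10; traffic in VLANs br0 is not a member of is not
    # delivered locally.
    bridge vlan add dev br0 vid 10 self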
Andrew Lunn
2019-Jul-27 03:02 UTC
[Bridge] [PATCH] net: bridge: Allow bridge to join multicast groups
> As you probably guessed, this model is quite different from what we
> are used to.

Yes, it takes a while to get used to the idea that the hardware is just
an accelerator for what the Linux stack can already do. And if the
switch cannot do some feature, pass the frame to Linux so it can handle
it.

You need to keep in mind that there could be ports in the bridge other
than switch ports, and those ports might be interested in the multicast
traffic. Hence the CPU needs to see the traffic. IGMP snooping can be
used to optimise this, but you still need to be careful: e.g. IPv6
Neighbour Discovery has often been broken on mv88e6xxx because we have
been too aggressive with filtering multicast.

    Andrew
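For reference, the snooping optimisation Andrew mentions is controlled
on the bridge device; a minimal sketch (snooping is on by default for
new bridges, the querier is not; br0 is a placeholder name):

    # Enable IGMP/MLD snooping so multicast is forwarded only to ports
    # with interested listeners, and act as querier if the LAN has none.
    ip link set dev br0 type bridge mcast_snooping 1 mcast_querier 1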