Nikolay Aleksandrov
2023-Oct-17 09:24 UTC
[Bridge] [PATCH net-next 09/13] bridge: mcast: Add MDB get support
On 10/16/23 16:12, Ido Schimmel wrote:
> Implement support for MDB get operation by looking up a matching MDB
> entry, allocating the skb according to the entry's size and then filling
> in the response. The operation is performed under the bridge multicast
> lock to ensure that the entry does not change between the time the reply
> size is determined and when the reply is filled in.
> 
> Signed-off-by: Ido Schimmel <idosch at nvidia.com>
> ---
>  net/bridge/br_device.c  |   1 +
>  net/bridge/br_mdb.c     | 154 ++++++++++++++++++++++++++++++++++++++++
>  net/bridge/br_private.h |   9 +++
>  3 files changed, 164 insertions(+)
> 
[snip]
> +int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
> +	       struct netlink_ext_ack *extack)
> +{
> +	struct net_bridge *br = netdev_priv(dev);
> +	struct net_bridge_mdb_entry *mp;
> +	struct sk_buff *skb;
> +	struct br_ip group;
> +	int err;
> +
> +	err = br_mdb_get_parse(dev, tb, &group, extack);
> +	if (err)
> +		return err;
> +
> +	spin_lock_bh(&br->multicast_lock);

Since this is only reading, could we use rcu to avoid blocking mcast
processing?

> +
> +	mp = br_mdb_ip_get(br, &group);
> +	if (!mp) {
> +		NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
> +		err = -ENOENT;
> +		goto unlock;
> +	}
> +
> +	skb = br_mdb_get_reply_alloc(mp);
> +	if (!skb) {
> +		err = -ENOMEM;
> +		goto unlock;
> +	}
> +
> +	err = br_mdb_get_reply_fill(skb, mp, portid, seq);
> +	if (err) {
> +		NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
> +		goto free;
> +	}
> +
> +	spin_unlock_bh(&br->multicast_lock);
> +
> +	return rtnl_unicast(skb, dev_net(dev), portid);
> +
> +free:
> +	kfree_skb(skb);
> +unlock:
> +	spin_unlock_bh(&br->multicast_lock);
> +	return err;
> +}
Ido Schimmel
2023-Oct-17 11:03 UTC
[Bridge] [PATCH net-next 09/13] bridge: mcast: Add MDB get support
On Tue, Oct 17, 2023 at 12:24:44PM +0300, Nikolay Aleksandrov wrote:
> On 10/16/23 16:12, Ido Schimmel wrote:
> > Implement support for MDB get operation by looking up a matching MDB
> > entry, allocating the skb according to the entry's size and then filling
> > in the response. The operation is performed under the bridge multicast
> > lock to ensure that the entry does not change between the time the reply
> > size is determined and when the reply is filled in.
> > 
> > Signed-off-by: Ido Schimmel <idosch at nvidia.com>
> > ---
> >  net/bridge/br_device.c  |   1 +
> >  net/bridge/br_mdb.c     | 154 ++++++++++++++++++++++++++++++++++++++++
> >  net/bridge/br_private.h |   9 +++
> >  3 files changed, 164 insertions(+)
> > 
> [snip]
> > +int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
> > +	       struct netlink_ext_ack *extack)
> > +{
> > +	struct net_bridge *br = netdev_priv(dev);
> > +	struct net_bridge_mdb_entry *mp;
> > +	struct sk_buff *skb;
> > +	struct br_ip group;
> > +	int err;
> > +
> > +	err = br_mdb_get_parse(dev, tb, &group, extack);
> > +	if (err)
> > +		return err;
> > +
> > +	spin_lock_bh(&br->multicast_lock);
> 
> Since this is only reading, could we use rcu to avoid blocking mcast
> processing?

I tried to explain this choice in the commit message. Do you think it's
a non-issue?

> 
> > +
> > +	mp = br_mdb_ip_get(br, &group);
> > +	if (!mp) {
> > +		NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
> > +		err = -ENOENT;
> > +		goto unlock;
> > +	}
> > +
> > +	skb = br_mdb_get_reply_alloc(mp);
> > +	if (!skb) {
> > +		err = -ENOMEM;
> > +		goto unlock;
> > +	}
> > +
> > +	err = br_mdb_get_reply_fill(skb, mp, portid, seq);
> > +	if (err) {
> > +		NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
> > +		goto free;
> > +	}
> > +
> > +	spin_unlock_bh(&br->multicast_lock);
> > +
> > +	return rtnl_unicast(skb, dev_net(dev), portid);
> > +
> > +free:
> > +	kfree_skb(skb);
> > +unlock:
> > +	spin_unlock_bh(&br->multicast_lock);
> > +	return err;
> > +}
> 
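For illustration, a rough sketch of the RCU-based variant Nikolay is suggesting,
built from the helpers added by the patch above. It assumes an RCU-safe lookup
helper (named br_mdb_ip_get_rcu() here; treat that name as hypothetical) and
that br_mdb_get_reply_alloc()/br_mdb_get_reply_fill() only use atomic
allocations. It does not address the issue the commit message calls out: under
RCU the entry can change between the size estimate and the fill, so a real
implementation would have to over-allocate or retry.

/* Sketch only: take the RCU read lock instead of br->multicast_lock so
 * multicast processing is not blocked while the reply is built.
 * br_mdb_ip_get_rcu() is assumed to exist; the parse/alloc/fill helpers
 * are the ones introduced by this patch.
 */
int br_mdb_get_rcu_sketch(struct net_device *dev, struct nlattr *tb[],
			  u32 portid, u32 seq,
			  struct netlink_ext_ack *extack)
{
	struct net_bridge *br = netdev_priv(dev);
	struct net_bridge_mdb_entry *mp;
	struct sk_buff *skb;
	struct br_ip group;
	int err;

	err = br_mdb_get_parse(dev, tb, &group, extack);
	if (err)
		return err;

	rcu_read_lock();

	mp = br_mdb_ip_get_rcu(br, &group);
	if (!mp) {
		NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
		err = -ENOENT;
		goto unlock;
	}

	skb = br_mdb_get_reply_alloc(mp);
	if (!skb) {
		err = -ENOMEM;
		goto unlock;
	}

	err = br_mdb_get_reply_fill(skb, mp, portid, seq);
	if (err) {
		/* The entry may have grown since the size estimate was
		 * taken; the spinlock version in the patch rules this out.
		 */
		NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
		kfree_skb(skb);
		goto unlock;
	}

	rcu_read_unlock();

	return rtnl_unicast(skb, dev_net(dev), portid);

unlock:
	rcu_read_unlock();
	return err;
}

The trade-off being discussed is exactly this: the sketch keeps readers off the
multicast lock, but gives up the guarantee that the entry is frozen between
br_mdb_get_reply_alloc() and br_mdb_get_reply_fill().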