Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 00/16] net: bridge: mcast: IGMPv3/MLDv2 fast-path (part 2)
From: Nikolay Aleksandrov <nikolay at nvidia.com>

Hi,
This is the second part of the IGMPv3/MLDv2 support which adds support for
the fast-path. In order to be able to handle source entries we add mdb
support for S,G entries (i.e. we add source address support to br_ip),
which requires extending the current mdb netlink API. Fortunately we just
add another attribute which will contain nested future mdb attributes, and
then use it to add support for S,G user add, del and dump.

The lookup sequence is simple: when IGMPv3/MLDv2 are enabled do the S,G
lookup first and, if it fails, fall back to *,G. The more complex part is
when we begin handling source lists and auto-installing S,G entries and
*,G filter mode transitions. We have the following cases:
 1) *,G INCLUDE -> EXCLUDE transition: we need to install the port in all
    of *,G's installed S,G entries for proper replication (except the ones
    explicitly blocked); this is also necessary when adding a new *,G
    EXCLUDE port group
 2) *,G EXCLUDE -> INCLUDE transition: we need to remove the port from all
    of *,G's installed S,G entries; this is also necessary when removing a
    *,G port group
 3) New S,G port entry: we need to install all current *,G EXCLUDE ports
 4) Remove S,G port entry: if all other port groups were auto-installed
    we can safely remove them and delete the whole S,G entry

Currently we compute these operations from the available ports, their
source lists and their filter mode. In the future we can extend the port
group structure and reduce the running time of these ops.

One current limitation is that host-joined S,G entries are not supported,
i.e. one cannot add "dev bridge port bridge" mdb S,G entries. The host
join is currently considered an EXCLUDE {} join, so it is reflected in all
of *,G's installed S,G entries. If an S,G,port entry is added as temporary
then the kernel can take it over if a source shows up in a report;
permanent entries are skipped.

In order to properly handle blocked sources we add a new port group
blocked flag to avoid forwarding to that port group in the S,G. Finally,
when forwarding we use the port group filter mode (if it is INCLUDE and
the port group comes from a *,G entry then don't replicate to it; if it is
EXCLUDE then forward) and the blocked flag (if it is set, skip that port
unless it is a router port) to decide whether the port should be skipped
(see the illustrative sketch after the diffstat below).

Another limitation is that we can't do some of the above transitions
without a small traffic drop while installing/removing entries. That will
be taken care of when we add atomic swap of port replication lists later.

Patch breakdown:
 - patches 1-3: prepare the mdb code for better extack support, which is
   used in later patches to return more meaningful errors
 - patches 4-6: add the source address field to struct br_ip and do minor
   cleanups around it
 - patches 7-8: extend the mdb netlink API so we can send new mdb
   attributes and use the new API for S,G entry add/del/dump support
 - patch 9: takes care of S,G entries when doing a lookup (first S,G,
   then *,G lookup)
 - patch 10: adds a new port group field and attribute for origin
   protocol; we use the already available RTPROT_ definitions. Currently
   user-space entries are added as RTPROT_STATIC and kernel entries as
   RTPROT_KERNEL; we may allow user-space to set custom values later
   (e.g. for FRR, clag)
 - patch 11: adds an internal S,G,port rhashtable to speed up filter mode
   transitions
 - patch 12: initial automatic install of S,G entries based on port
   groups' source lists
 - patch 13: handles port group modes on transitions or when new port
   group entries are added
 - patch 14: self-explanatory - adds support for blocked port group
   entries, needed to stop forwarding to particular S,G,port entries
 - patch 15: handles host-join/leave state changes, treats host joins as
   EXCLUDE {} groups (reflected in all of *,G's S,G entries)
 - patch 16: finally adds the fast-path filter mode and block flag support

Here are the sets that will come next (in order):
 - iproute2 support for IGMPv3/MLDv2
 - selftests for all mode transitions and group flags
 - explicit host tracking for proper fast-leave support
 - atomic port replication lists (these are also needed for broadcast
   forwarding optimizations)
 - mode transition optimization and removal of open-coded sorted lists

Not implemented yet:
 - Host IGMPv3/MLDv2 filter support (currently we handle only join/leave
   as before)
 - Proper other-querier source timer and value updates
 - IGMPv3/v2 and MLDv2/v1 compat (I have a few rough patches for this one)

v2: fix build with CONFIG_BATMAN_ADV_MCAST in patch 6

Thanks,
 Nik

Nikolay Aleksandrov (16):
  net: bridge: mdb: use extack in br_mdb_parse()
  net: bridge: mdb: move all port and bridge checks to br_mdb_add
  net: bridge: mdb: use extack in br_mdb_add() and br_mdb_add_group()
  net: bridge: add src field to br_ip
  net: bridge: mcast: use br_ip's src for src groups and querier address
  net: bridge: mcast: rename br_ip's u member to dst
  net: bridge: mdb: add support to extend add/del commands
  net: bridge: mdb: add support for add/del/dump of entries with source
  net: bridge: mcast: when igmpv3/mldv2 are enabled lookup (S,G) first,
    then (*,G)
  net: bridge: mcast: add rt_protocol field to the port group struct
  net: bridge: mcast: add sg_port rhashtable
  net: bridge: mcast: install S,G entries automatically based on reports
  net: bridge: mcast: handle port group filter modes
  net: bridge: mcast: add support for blocked port groups
  net: bridge: mcast: handle host state
  net: bridge: mcast: when forwarding handle filter mode and blocked flag

 include/linux/if_bridge.h      |   8 +-
 include/uapi/linux/if_bridge.h |  17 +
 net/batman-adv/multicast.c     |  14 +-
 net/bridge/br_forward.c        |  17 +-
 net/bridge/br_mdb.c            | 371 +++++++++++++-----
 net/bridge/br_multicast.c      | 678 +++++++++++++++++++++++++++------
 net/bridge/br_private.h        |  49 ++-
 7 files changed, 916 insertions(+), 238 deletions(-)

--
2.25.4
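The per-port replication decision described in the cover letter (filter mode
plus the new blocked flag) can be condensed into a small helper. The following
is only an illustrative sketch of the stated rules with made-up structure and
function names; it is not the kernel implementation from patch 16:

#include <stdbool.h>

enum filter_mode { MCAST_INCLUDE, MCAST_EXCLUDE };

/* simplified stand-in for a bridge port group entry */
struct pg_example {
	enum filter_mode filter_mode;	/* per port group */
	bool blocked;			/* new "blocked" port group flag */
	bool router_port;		/* the port is also a multicast router port */
};

/* Decide whether an S,G packet is replicated to this port group.
 * star_g_entry is true when the packet matched a *,G mdb entry,
 * i.e. no S,G entry existed for the packet's source address. */
bool replicate_to_port(const struct pg_example *pg, bool star_g_entry)
{
	/* blocked entries are skipped unless the port is a router port */
	if (pg->blocked && !pg->router_port)
		return false;
	/* an INCLUDE port group reached via a *,G entry does not get the
	 * packet; the sources it wants are covered by explicit S,G entries */
	if (star_g_entry && pg->filter_mode == MCAST_INCLUDE)
		return false;
	/* EXCLUDE port groups (and direct S,G matches) are forwarded to */
	return true;
}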
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 01/16] net: bridge: mdb: use extack in br_mdb_parse()
From: Nikolay Aleksandrov <nikolay at nvidia.com> We can drop the pr_info() calls and just use extack to return a meaningful error to user-space when br_mdb_parse() fails. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_mdb.c | 60 +++++++++++++++++++++++++++++---------------- 1 file changed, 39 insertions(+), 21 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 00f1651a6aba..d4031f5554f7 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -629,33 +629,50 @@ void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port, rtnl_set_sk_err(net, RTNLGRP_MDB, err); } -static bool is_valid_mdb_entry(struct br_mdb_entry *entry) +static bool is_valid_mdb_entry(struct br_mdb_entry *entry, + struct netlink_ext_ack *extack) { - if (entry->ifindex == 0) + if (entry->ifindex == 0) { + NL_SET_ERR_MSG_MOD(extack, "Zero entry ifindex is not allowed"); return false; + } if (entry->addr.proto == htons(ETH_P_IP)) { - if (!ipv4_is_multicast(entry->addr.u.ip4)) + if (!ipv4_is_multicast(entry->addr.u.ip4)) { + NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is not multicast"); return false; - if (ipv4_is_local_multicast(entry->addr.u.ip4)) + } + if (ipv4_is_local_multicast(entry->addr.u.ip4)) { + NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is local multicast"); return false; + } #if IS_ENABLED(CONFIG_IPV6) } else if (entry->addr.proto == htons(ETH_P_IPV6)) { - if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) + if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) { + NL_SET_ERR_MSG_MOD(extack, "IPv6 entry group address is link-local all nodes"); return false; + } #endif - } else + } else { + NL_SET_ERR_MSG_MOD(extack, "Unknown entry protocol"); return false; - if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) + } + + if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) { + NL_SET_ERR_MSG_MOD(extack, "Unknown entry state"); return false; - if (entry->vid >= VLAN_VID_MASK) + } + if (entry->vid >= VLAN_VID_MASK) { + NL_SET_ERR_MSG_MOD(extack, "Invalid entry VLAN id"); return false; + } return true; } static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, - struct net_device **pdev, struct br_mdb_entry **pentry) + struct net_device **pdev, struct br_mdb_entry **pentry, + struct netlink_ext_ack *extack) { struct net *net = sock_net(skb->sk); struct br_mdb_entry *entry; @@ -671,36 +688,37 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, bpm = nlmsg_data(nlh); if (bpm->ifindex == 0) { - pr_info("PF_BRIDGE: br_mdb_parse() with invalid ifindex\n"); + NL_SET_ERR_MSG_MOD(extack, "Invalid bridge ifindex"); return -EINVAL; } dev = __dev_get_by_index(net, bpm->ifindex); if (dev == NULL) { - pr_info("PF_BRIDGE: br_mdb_parse() with unknown ifindex\n"); + NL_SET_ERR_MSG_MOD(extack, "Bridge device doesn't exist"); return -ENODEV; } if (!(dev->priv_flags & IFF_EBRIDGE)) { - pr_info("PF_BRIDGE: br_mdb_parse() with non-bridge\n"); + NL_SET_ERR_MSG_MOD(extack, "Device is not a bridge"); return -EOPNOTSUPP; } *pdev = dev; - if (!tb[MDBA_SET_ENTRY] || - nla_len(tb[MDBA_SET_ENTRY]) != sizeof(struct br_mdb_entry)) { - pr_info("PF_BRIDGE: br_mdb_parse() with invalid attr\n"); + if (!tb[MDBA_SET_ENTRY]) { + NL_SET_ERR_MSG_MOD(extack, "Missing MDBA_SET_ENTRY attribute"); return -EINVAL; } - - entry = nla_data(tb[MDBA_SET_ENTRY]); - if (!is_valid_mdb_entry(entry)) { - pr_info("PF_BRIDGE: br_mdb_parse() with invalid entry\n"); + if (nla_len(tb[MDBA_SET_ENTRY]) != sizeof(struct br_mdb_entry)) { 
+ NL_SET_ERR_MSG_MOD(extack, "Invalid MDBA_SET_ENTRY attribute length"); return -EINVAL; } + entry = nla_data(tb[MDBA_SET_ENTRY]); + if (!is_valid_mdb_entry(entry, extack)) + return -EINVAL; *pentry = entry; + return 0; } @@ -797,7 +815,7 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, struct net_bridge *br; int err; - err = br_mdb_parse(skb, nlh, &dev, &entry); + err = br_mdb_parse(skb, nlh, &dev, &entry, extack); if (err < 0) return err; @@ -892,7 +910,7 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh, struct net_bridge *br; int err; - err = br_mdb_parse(skb, nlh, &dev, &entry); + err = br_mdb_parse(skb, nlh, &dev, &entry, extack); if (err < 0) return err; -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 02/16] net: bridge: mdb: move all port and bridge checks to br_mdb_add
From: Nikolay Aleksandrov <nikolay at nvidia.com> To avoid doing duplicate device checks and searches (the same were done in br_mdb_add and __br_mdb_add) pass the already found port to __br_mdb_add and pull the bridge's netif_running and enabled multicast checks to br_mdb_add. This would also simplify the future extack errors. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_mdb.c | 24 +++++++----------------- 1 file changed, 7 insertions(+), 17 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index d4031f5554f7..92ab7369fee0 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -775,31 +775,18 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, } static int __br_mdb_add(struct net *net, struct net_bridge *br, + struct net_bridge_port *p, struct br_mdb_entry *entry) { struct br_ip ip; - struct net_device *dev; - struct net_bridge_port *p = NULL; int ret; - if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) - return -EINVAL; - - if (entry->ifindex != br->dev->ifindex) { - dev = __dev_get_by_index(net, entry->ifindex); - if (!dev) - return -ENODEV; - - p = br_port_get_rtnl(dev); - if (!p || p->br != br || p->state == BR_STATE_DISABLED) - return -EINVAL; - } - __mdb_entry_to_br_ip(entry, &ip); spin_lock_bh(&br->multicast_lock); ret = br_mdb_add_group(br, p, &ip, entry); spin_unlock_bh(&br->multicast_lock); + return ret; } @@ -821,6 +808,9 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, br = netdev_priv(dev); + if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + return -EINVAL; + if (entry->ifindex != br->dev->ifindex) { pdev = __dev_get_by_index(net, entry->ifindex); if (!pdev) @@ -840,12 +830,12 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, if (br_vlan_enabled(br->dev) && vg && entry->vid == 0) { list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; - err = __br_mdb_add(net, br, entry); + err = __br_mdb_add(net, br, p, entry); if (err) break; } } else { - err = __br_mdb_add(net, br, entry); + err = __br_mdb_add(net, br, p, entry); } return err; -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 03/16] net: bridge: mdb: use extack in br_mdb_add() and br_mdb_add_group()
From: Nikolay Aleksandrov <nikolay at nvidia.com> Pass and use extack all the way down to br_mdb_add_group(). Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_mdb.c | 54 +++++++++++++++++++++++++++++++++++---------- 1 file changed, 42 insertions(+), 12 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 92ab7369fee0..1df62d887953 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -723,7 +723,8 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, } static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, - struct br_ip *group, struct br_mdb_entry *entry) + struct br_ip *group, struct br_mdb_entry *entry, + struct netlink_ext_ack *extack) { struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; @@ -742,10 +743,14 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, /* host join */ if (!port) { /* don't allow any flags for host-joined groups */ - if (entry->state) + if (entry->state) { + NL_SET_ERR_MSG_MOD(extack, "Flags are not allowed for host groups"); return -EINVAL; - if (mp->host_joined) + } + if (mp->host_joined) { + NL_SET_ERR_MSG_MOD(extack, "Group is already joined by host"); return -EEXIST; + } br_multicast_host_join(mp, false); br_mdb_notify(br->dev, mp, NULL, RTM_NEWMDB); @@ -756,16 +761,20 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port == port) + if (p->port == port) { + NL_SET_ERR_MSG_MOD(extack, "Group is already joined by port"); return -EEXIST; + } if ((unsigned long)p->port < (unsigned long)port) break; } p = br_multicast_new_port_group(port, group, *pp, entry->state, NULL, MCAST_EXCLUDE); - if (unlikely(!p)) + if (unlikely(!p)) { + NL_SET_ERR_MSG_MOD(extack, "Couldn't allocate new port group"); return -ENOMEM; + } rcu_assign_pointer(*pp, p); if (entry->state == MDB_TEMPORARY) mod_timer(&p->timer, now + br->multicast_membership_interval); @@ -776,7 +785,8 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, static int __br_mdb_add(struct net *net, struct net_bridge *br, struct net_bridge_port *p, - struct br_mdb_entry *entry) + struct br_mdb_entry *entry, + struct netlink_ext_ack *extack) { struct br_ip ip; int ret; @@ -784,7 +794,7 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br, __mdb_entry_to_br_ip(entry, &ip); spin_lock_bh(&br->multicast_lock); - ret = br_mdb_add_group(br, p, &ip, entry); + ret = br_mdb_add_group(br, p, &ip, entry, extack); spin_unlock_bh(&br->multicast_lock); return ret; @@ -808,17 +818,37 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, br = netdev_priv(dev); - if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!netif_running(br->dev)) { + NL_SET_ERR_MSG_MOD(extack, "Bridge device is not running"); return -EINVAL; + } + + if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) { + NL_SET_ERR_MSG_MOD(extack, "Bridge's multicast processing is disabled"); + return -EINVAL; + } if (entry->ifindex != br->dev->ifindex) { pdev = __dev_get_by_index(net, entry->ifindex); - if (!pdev) + if (!pdev) { + NL_SET_ERR_MSG_MOD(extack, "Port net device doesn't exist"); return -ENODEV; + } p = br_port_get_rtnl(pdev); - if (!p || p->br != br || p->state == BR_STATE_DISABLED) + if (!p) { + NL_SET_ERR_MSG_MOD(extack, "Net device is not a bridge port"); return -EINVAL; + } + + if (p->br != br) { + 
NL_SET_ERR_MSG_MOD(extack, "Port belongs to a different bridge device"); + return -EINVAL; + } + if (p->state == BR_STATE_DISABLED) { + NL_SET_ERR_MSG_MOD(extack, "Port is in disabled state"); + return -EINVAL; + } vg = nbp_vlan_group(p); } else { vg = br_vlan_group(br); @@ -830,12 +860,12 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, if (br_vlan_enabled(br->dev) && vg && entry->vid == 0) { list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; - err = __br_mdb_add(net, br, p, entry); + err = __br_mdb_add(net, br, p, entry, extack); if (err) break; } } else { - err = __br_mdb_add(net, br, p, entry); + err = __br_mdb_add(net, br, p, entry, extack); } return err; -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 04/16] net: bridge: add src field to br_ip
From: Nikolay Aleksandrov <nikolay at nvidia.com>

Add a new src field to struct br_ip which will be used to lookup S,G
entries. When the SSM option is added we will enable full br_ip lookups.

Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com>
---
 include/linux/if_bridge.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h
index 6479a38e52fa..4fb9c4954f3a 100644
--- a/include/linux/if_bridge.h
+++ b/include/linux/if_bridge.h
@@ -18,6 +18,12 @@ struct br_ip {
 		__be32	ip4;
 #if IS_ENABLED(CONFIG_IPV6)
 		struct in6_addr ip6;
+#endif
+	} src;
+	union {
+		__be32	ip4;
+#if IS_ENABLED(CONFIG_IPV6)
+		struct in6_addr ip6;
 #endif
 	} u;
 	__be16		proto;
--
2.25.4
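For context, patch 09 uses this new field to build (S,G) lookup keys in
br_mdb_get(). A minimal sketch of such a key follows (illustrative only,
assuming skb and vid are in scope as in that function; the group member is
still called u at this point and is renamed to dst in patch 06):

	struct br_ip sg_key;

	memset(&sg_key, 0, sizeof(sg_key));
	sg_key.proto = htons(ETH_P_IP);
	sg_key.vid = vid;			/* VLAN of the received packet */
	sg_key.u.ip4 = ip_hdr(skb)->daddr;	/* group address (the later "dst") */
	sg_key.src.ip4 = ip_hdr(skb)->saddr;	/* the new source field */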
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 05/16] net: bridge: mcast: use br_ip's src for src groups and querier address
From: Nikolay Aleksandrov <nikolay at nvidia.com> Now that we have src and dst in br_ip it is logical to use the src field for the cases where we need to work with a source address such as querier source address and group source address. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_mdb.c | 4 +-- net/bridge/br_multicast.c | 56 +++++++++++++++++++-------------------- 2 files changed, 30 insertions(+), 30 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 1df62d887953..269ffd2e549b 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -98,7 +98,7 @@ static int __mdb_fill_srcs(struct sk_buff *skb, switch (ent->addr.proto) { case htons(ETH_P_IP): if (nla_put_in_addr(skb, MDBA_MDB_SRCATTR_ADDRESS, - ent->addr.u.ip4)) { + ent->addr.src.ip4)) { nla_nest_cancel(skb, nest_ent); goto out_cancel_err; } @@ -106,7 +106,7 @@ static int __mdb_fill_srcs(struct sk_buff *skb, #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): if (nla_put_in6_addr(skb, MDBA_MDB_SRCATTR_ADDRESS, - &ent->addr.u.ip6)) { + &ent->addr.src.ip6)) { nla_nest_cancel(skb, nest_ent); goto out_cancel_err; } diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index e77f1e27caf7..a899c22c8f57 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -423,7 +423,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, if (over_lmqt == time_after(ent->timer.expires, lmqt) && ent->src_query_rexmit_cnt > 0) { - ihv3->srcs[lmqt_srcs++] = ent->addr.u.ip4; + ihv3->srcs[lmqt_srcs++] = ent->addr.src.ip4; ent->src_query_rexmit_cnt--; if (need_rexmit && ent->src_query_rexmit_cnt) *need_rexmit = true; @@ -584,7 +584,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, if (over_llqt == time_after(ent->timer.expires, llqt) && ent->src_query_rexmit_cnt > 0) { - mld2q->mld2q_srcs[llqt_srcs++] = ent->addr.u.ip6; + mld2q->mld2q_srcs[llqt_srcs++] = ent->addr.src.ip6; ent->src_query_rexmit_cnt--; if (need_rexmit && ent->src_query_rexmit_cnt) *need_rexmit = true; @@ -717,13 +717,13 @@ br_multicast_find_group_src(struct net_bridge_port_group *pg, struct br_ip *ip) switch (ip->proto) { case htons(ETH_P_IP): hlist_for_each_entry(ent, &pg->src_list, node) - if (ip->u.ip4 == ent->addr.u.ip4) + if (ip->src.ip4 == ent->addr.src.ip4) return ent; break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): hlist_for_each_entry(ent, &pg->src_list, node) - if (!ipv6_addr_cmp(&ent->addr.u.ip6, &ip->u.ip6)) + if (!ipv6_addr_cmp(&ent->addr.src.ip6, &ip->src.ip6)) return ent; break; #endif @@ -742,14 +742,14 @@ br_multicast_new_group_src(struct net_bridge_port_group *pg, struct br_ip *src_i switch (src_ip->proto) { case htons(ETH_P_IP): - if (ipv4_is_zeronet(src_ip->u.ip4) || - ipv4_is_multicast(src_ip->u.ip4)) + if (ipv4_is_zeronet(src_ip->src.ip4) || + ipv4_is_multicast(src_ip->src.ip4)) return NULL; break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - if (ipv6_addr_any(&src_ip->u.ip6) || - ipv6_addr_is_multicast(&src_ip->u.ip6)) + if (ipv6_addr_any(&src_ip->src.ip6) || + ipv6_addr_is_multicast(&src_ip->src.ip6)) return NULL; break; #endif @@ -1019,10 +1019,10 @@ static void br_multicast_select_own_querier(struct net_bridge *br, struct sk_buff *skb) { if (ip->proto == htons(ETH_P_IP)) - br->ip4_querier.addr.u.ip4 = ip_hdr(skb)->saddr; + br->ip4_querier.addr.src.ip4 = ip_hdr(skb)->saddr; #if IS_ENABLED(CONFIG_IPV6) else - br->ip6_querier.addr.u.ip6 = ipv6_hdr(skb)->saddr; + br->ip6_querier.addr.src.ip6 = ipv6_hdr(skb)->saddr; 
#endif } @@ -1399,7 +1399,7 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (!ent) { ent = br_multicast_new_group_src(pg, &src_ip); @@ -1433,7 +1433,7 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) ent->flags &= ~BR_SGRP_F_DELETE; @@ -1467,7 +1467,7 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { ent->flags &= ~BR_SGRP_F_DELETE; @@ -1530,7 +1530,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { ent->flags &= ~BR_SGRP_F_SEND; @@ -1573,7 +1573,7 @@ static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { if (timer_pending(&ent->timer)) { @@ -1634,7 +1634,7 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { ent->flags = (ent->flags & ~BR_SGRP_F_DELETE) | @@ -1672,7 +1672,7 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { ent->flags &= ~BR_SGRP_F_DELETE; @@ -1736,7 +1736,7 @@ static void __grp_src_block_incl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (ent) { ent->flags |= BR_SGRP_F_SEND; @@ -1770,7 +1770,7 @@ static bool __grp_src_block_excl(struct net_bridge_port_group *pg, memset(&src_ip, 0, sizeof(src_ip)); src_ip.proto = pg->addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { - memcpy(&src_ip.u, srcs, src_size); + memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); if (!ent) { ent = br_multicast_new_group_src(pg, &src_ip); @@ -2071,16 +2071,16 @@ static bool br_ip4_multicast_select_querier(struct net_bridge *br, !timer_pending(&br->ip4_other_query.timer)) goto update; - if 
(!br->ip4_querier.addr.u.ip4) + if (!br->ip4_querier.addr.src.ip4) goto update; - if (ntohl(saddr) <= ntohl(br->ip4_querier.addr.u.ip4)) + if (ntohl(saddr) <= ntohl(br->ip4_querier.addr.src.ip4)) goto update; return false; update: - br->ip4_querier.addr.u.ip4 = saddr; + br->ip4_querier.addr.src.ip4 = saddr; /* update protected by general multicast_lock by caller */ rcu_assign_pointer(br->ip4_querier.port, port); @@ -2097,13 +2097,13 @@ static bool br_ip6_multicast_select_querier(struct net_bridge *br, !timer_pending(&br->ip6_other_query.timer)) goto update; - if (ipv6_addr_cmp(saddr, &br->ip6_querier.addr.u.ip6) <= 0) + if (ipv6_addr_cmp(saddr, &br->ip6_querier.addr.src.ip6) <= 0) goto update; return false; update: - br->ip6_querier.addr.u.ip6 = *saddr; + br->ip6_querier.addr.src.ip6 = *saddr; /* update protected by general multicast_lock by caller */ rcu_assign_pointer(br->ip6_querier.port, port); @@ -2118,10 +2118,10 @@ static bool br_multicast_select_querier(struct net_bridge *br, { switch (saddr->proto) { case htons(ETH_P_IP): - return br_ip4_multicast_select_querier(br, port, saddr->u.ip4); + return br_ip4_multicast_select_querier(br, port, saddr->src.ip4); #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - return br_ip6_multicast_select_querier(br, port, &saddr->u.ip6); + return br_ip6_multicast_select_querier(br, port, &saddr->src.ip6); #endif } @@ -2263,7 +2263,7 @@ static void br_ip4_multicast_query(struct net_bridge *br, if (!group) { saddr.proto = htons(ETH_P_IP); - saddr.u.ip4 = iph->saddr; + saddr.src.ip4 = iph->saddr; br_multicast_query_received(br, port, &br->ip4_other_query, &saddr, max_delay); @@ -2351,7 +2351,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, if (is_general_query) { saddr.proto = htons(ETH_P_IPV6); - saddr.u.ip6 = ipv6_hdr(skb)->saddr; + saddr.src.ip6 = ipv6_hdr(skb)->saddr; br_multicast_query_received(br, port, &br->ip6_other_query, &saddr, max_delay); -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 06/16] net: bridge: mcast: rename br_ip's u member to dst
From: Nikolay Aleksandrov <nikolay at nvidia.com> Since now we have src in br_ip, u no longer makes sense so rename it to dst. No functional changes. v2: fix build with CONFIG_BATMAN_ADV_MCAST CC: Marek Lindner <mareklindner at neomailbox.ch> CC: Simon Wunderlich <sw at simonwunderlich.de> CC: Antonio Quartulli <a at unstable.cc> CC: Sven Eckelmann <sven at narfation.org> CC: b.a.t.m.a.n at lists.open-mesh.org Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/linux/if_bridge.h | 2 +- net/batman-adv/multicast.c | 14 +++++++------- net/bridge/br_mdb.c | 16 ++++++++-------- net/bridge/br_multicast.c | 26 +++++++++++++------------- 4 files changed, 29 insertions(+), 29 deletions(-) diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h index 4fb9c4954f3a..556caed00258 100644 --- a/include/linux/if_bridge.h +++ b/include/linux/if_bridge.h @@ -25,7 +25,7 @@ struct br_ip { #if IS_ENABLED(CONFIG_IPV6) struct in6_addr ip6; #endif - } u; + } dst; __be16 proto; __u16 vid; }; diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c index 1622c3f5898f..7dda0f7b3d96 100644 --- a/net/batman-adv/multicast.c +++ b/net/batman-adv/multicast.c @@ -220,7 +220,7 @@ static u8 batadv_mcast_mla_rtr_flags_bridge_get(struct batadv_priv *bat_priv, * address here, only IPv6 ones */ if (br_ip_entry->addr.proto == htons(ETH_P_IPV6) && - ipv6_addr_is_ll_all_routers(&br_ip_entry->addr.u.ip6)) + ipv6_addr_is_ll_all_routers(&br_ip_entry->addr.dst.ip6)) flags &= ~BATADV_MCAST_WANT_NO_RTR6; list_del(&br_ip_entry->list); @@ -561,10 +561,10 @@ batadv_mcast_mla_softif_get(struct net_device *dev, static void batadv_mcast_mla_br_addr_cpy(char *dst, const struct br_ip *src) { if (src->proto == htons(ETH_P_IP)) - ip_eth_mc_map(src->u.ip4, dst); + ip_eth_mc_map(src->dst.ip4, dst); #if IS_ENABLED(CONFIG_IPV6) else if (src->proto == htons(ETH_P_IPV6)) - ipv6_eth_mc_map(&src->u.ip6, dst); + ipv6_eth_mc_map(&src->dst.ip6, dst); #endif else eth_zero_addr(dst); @@ -608,11 +608,11 @@ static int batadv_mcast_mla_bridge_get(struct net_device *dev, continue; if (tvlv_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES && - ipv4_is_local_multicast(br_ip_entry->addr.u.ip4)) + ipv4_is_local_multicast(br_ip_entry->addr.dst.ip4)) continue; if (!(tvlv_flags & BATADV_MCAST_WANT_NO_RTR4) && - !ipv4_is_local_multicast(br_ip_entry->addr.u.ip4)) + !ipv4_is_local_multicast(br_ip_entry->addr.dst.ip4)) continue; } @@ -622,11 +622,11 @@ static int batadv_mcast_mla_bridge_get(struct net_device *dev, continue; if (tvlv_flags & BATADV_MCAST_WANT_ALL_UNSNOOPABLES && - ipv6_addr_is_ll_all_nodes(&br_ip_entry->addr.u.ip6)) + ipv6_addr_is_ll_all_nodes(&br_ip_entry->addr.dst.ip6)) continue; if (!(tvlv_flags & BATADV_MCAST_WANT_NO_RTR6) && - IPV6_ADDR_MC_SCOPE(&br_ip_entry->addr.u.ip6) > + IPV6_ADDR_MC_SCOPE(&br_ip_entry->addr.dst.ip6) > IPV6_ADDR_SCOPE_LINKLOCAL) continue; } diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 269ffd2e549b..a1ff0a372185 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -70,10 +70,10 @@ static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip) ip->vid = entry->vid; ip->proto = entry->addr.proto; if (ip->proto == htons(ETH_P_IP)) - ip->u.ip4 = entry->addr.u.ip4; + ip->dst.ip4 = entry->addr.u.ip4; #if IS_ENABLED(CONFIG_IPV6) else - ip->u.ip6 = entry->addr.u.ip6; + ip->dst.ip6 = entry->addr.u.ip6; #endif } @@ -158,10 +158,10 @@ static int __mdb_fill_info(struct sk_buff *skb, e.ifindex = ifindex; e.vid = mp->addr.vid; if (mp->addr.proto == htons(ETH_P_IP)) - 
e.addr.u.ip4 = mp->addr.u.ip4; + e.addr.u.ip4 = mp->addr.dst.ip4; #if IS_ENABLED(CONFIG_IPV6) if (mp->addr.proto == htons(ETH_P_IPV6)) - e.addr.u.ip6 = mp->addr.u.ip6; + e.addr.u.ip6 = mp->addr.dst.ip6; #endif e.addr.proto = mp->addr.proto; nest_ent = nla_nest_start_noflag(skb, @@ -474,10 +474,10 @@ static void br_mdb_switchdev_host_port(struct net_device *dev, }; if (mp->addr.proto == htons(ETH_P_IP)) - ip_eth_mc_map(mp->addr.u.ip4, mdb.addr); + ip_eth_mc_map(mp->addr.dst.ip4, mdb.addr); #if IS_ENABLED(CONFIG_IPV6) else - ipv6_eth_mc_map(&mp->addr.u.ip6, mdb.addr); + ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr); #endif mdb.obj.orig_dev = dev; @@ -520,10 +520,10 @@ void br_mdb_notify(struct net_device *dev, if (pg) { if (mp->addr.proto == htons(ETH_P_IP)) - ip_eth_mc_map(mp->addr.u.ip4, mdb.addr); + ip_eth_mc_map(mp->addr.dst.ip4, mdb.addr); #if IS_ENABLED(CONFIG_IPV6) else - ipv6_eth_mc_map(&mp->addr.u.ip6, mdb.addr); + ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr); #endif mdb.obj.orig_dev = pg->port->dev; switch (type) { diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index a899c22c8f57..e1fb822b9ddb 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -86,7 +86,7 @@ static struct net_bridge_mdb_entry *br_mdb_ip4_get(struct net_bridge *br, struct br_ip br_dst; memset(&br_dst, 0, sizeof(br_dst)); - br_dst.u.ip4 = dst; + br_dst.dst.ip4 = dst; br_dst.proto = htons(ETH_P_IP); br_dst.vid = vid; @@ -101,7 +101,7 @@ static struct net_bridge_mdb_entry *br_mdb_ip6_get(struct net_bridge *br, struct br_ip br_dst; memset(&br_dst, 0, sizeof(br_dst)); - br_dst.u.ip6 = *dst; + br_dst.dst.ip6 = *dst; br_dst.proto = htons(ETH_P_IPV6); br_dst.vid = vid; @@ -126,11 +126,11 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, switch (skb->protocol) { case htons(ETH_P_IP): - ip.u.ip4 = ip_hdr(skb)->daddr; + ip.dst.ip4 = ip_hdr(skb)->daddr; break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - ip.u.ip6 = ipv6_hdr(skb)->daddr; + ip.dst.ip6 = ipv6_hdr(skb)->daddr; break; #endif default: @@ -625,9 +625,9 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, switch (group->proto) { case htons(ETH_P_IP): - ip4_dst = ip_dst ? ip_dst->u.ip4 : htonl(INADDR_ALLHOSTS_GROUP); + ip4_dst = ip_dst ? ip_dst->dst.ip4 : htonl(INADDR_ALLHOSTS_GROUP); return br_ip4_multicast_alloc_query(br, pg, - ip4_dst, group->u.ip4, + ip4_dst, group->dst.ip4, with_srcs, over_lmqt, sflag, igmp_type, need_rexmit); @@ -636,13 +636,13 @@ static struct sk_buff *br_multicast_alloc_query(struct net_bridge *br, struct in6_addr ip6_dst; if (ip_dst) - ip6_dst = ip_dst->u.ip6; + ip6_dst = ip_dst->dst.ip6; else ipv6_addr_set(&ip6_dst, htonl(0xff020000), 0, 0, htonl(1)); return br_ip6_multicast_alloc_query(br, pg, - &ip6_dst, &group->u.ip6, + &ip6_dst, &group->dst.ip6, with_srcs, over_lmqt, sflag, igmp_type, need_rexmit); @@ -906,7 +906,7 @@ static int br_ip4_multicast_add_group(struct net_bridge *br, return 0; memset(&br_group, 0, sizeof(br_group)); - br_group.u.ip4 = group; + br_group.dst.ip4 = group; br_group.proto = htons(ETH_P_IP); br_group.vid = vid; filter_mode = igmpv2 ? MCAST_EXCLUDE : MCAST_INCLUDE; @@ -930,7 +930,7 @@ static int br_ip6_multicast_add_group(struct net_bridge *br, return 0; memset(&br_group, 0, sizeof(br_group)); - br_group.u.ip6 = *group; + br_group.dst.ip6 = *group; br_group.proto = htons(ETH_P_IPV6); br_group.vid = vid; filter_mode = mldv1 ? 
MCAST_EXCLUDE : MCAST_INCLUDE; @@ -1079,7 +1079,7 @@ static void br_multicast_send_query(struct net_bridge *br, !br_opt_get(br, BROPT_MULTICAST_QUERIER)) return; - memset(&br_group.u, 0, sizeof(br_group.u)); + memset(&br_group.dst, 0, sizeof(br_group.dst)); if (port ? (own_query == &port->ip4_own_query) : (own_query == &br->ip4_own_query)) { @@ -2506,7 +2506,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br, own_query = port ? &port->ip4_own_query : &br->ip4_own_query; memset(&br_group, 0, sizeof(br_group)); - br_group.u.ip4 = group; + br_group.dst.ip4 = group; br_group.proto = htons(ETH_P_IP); br_group.vid = vid; @@ -2530,7 +2530,7 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, own_query = port ? &port->ip6_own_query : &br->ip6_own_query; memset(&br_group, 0, sizeof(br_group)); - br_group.u.ip6 = *group; + br_group.dst.ip6 = *group; br_group.proto = htons(ETH_P_IPV6); br_group.vid = vid; -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 07/16] net: bridge: mdb: add support to extend add/del commands
From: Nikolay Aleksandrov <nikolay at nvidia.com> Since the MDB add/del code expects an exact struct br_mdb_entry we can't really add any extensions, thus add a new nested attribute at the level of MDBA_SET_ENTRY called MDBA_SET_ENTRY_ATTRS which will be used to pass all new options via netlink attributes. This patch doesn't change anything functionally since the new attribute is not used yet, only parsed. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/uapi/linux/if_bridge.h | 12 ++++++++++++ net/bridge/br_mdb.c | 22 +++++++++++++++++++--- 2 files changed, 31 insertions(+), 3 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 75a2ac479247..dc52f8cffa0d 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -530,10 +530,22 @@ struct br_mdb_entry { enum { MDBA_SET_ENTRY_UNSPEC, MDBA_SET_ENTRY, + MDBA_SET_ENTRY_ATTRS, __MDBA_SET_ENTRY_MAX, }; #define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1) +/* [MDBA_SET_ENTRY_ATTRS] = { + * [MDBE_ATTR_xxx] + * ... + * } + */ +enum { + MDBE_ATTR_UNSPEC, + __MDBE_ATTR_MAX, +}; +#define MDBE_ATTR_MAX (__MDBE_ATTR_MAX - 1) + /* Embedded inside LINK_XSTATS_TYPE_BRIDGE */ enum { BRIDGE_XSTATS_UNSPEC, diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index a1ff0a372185..907df6d695ec 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -670,9 +670,12 @@ static bool is_valid_mdb_entry(struct br_mdb_entry *entry, return true; } +static const struct nla_policy br_mdbe_attrs_pol[MDBE_ATTR_MAX + 1] = { +}; + static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, struct net_device **pdev, struct br_mdb_entry **pentry, - struct netlink_ext_ack *extack) + struct nlattr **mdb_attrs, struct netlink_ext_ack *extack) { struct net *net = sock_net(skb->sk); struct br_mdb_entry *entry; @@ -719,6 +722,17 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, return -EINVAL; *pentry = entry; + if (tb[MDBA_SET_ENTRY_ATTRS]) { + err = nla_parse_nested(mdb_attrs, MDBE_ATTR_MAX, + tb[MDBA_SET_ENTRY_ATTRS], + br_mdbe_attrs_pol, extack); + if (err) + return err; + } else { + memset(mdb_attrs, 0, + sizeof(struct nlattr *) * (MDBE_ATTR_MAX + 1)); + } + return 0; } @@ -803,6 +817,7 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br, static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, struct netlink_ext_ack *extack) { + struct nlattr *mdb_attrs[MDBE_ATTR_MAX + 1]; struct net *net = sock_net(skb->sk); struct net_bridge_vlan_group *vg; struct net_bridge_port *p = NULL; @@ -812,7 +827,7 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, struct net_bridge *br; int err; - err = br_mdb_parse(skb, nlh, &dev, &entry, extack); + err = br_mdb_parse(skb, nlh, &dev, &entry, mdb_attrs, extack); if (err < 0) return err; @@ -921,6 +936,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry) static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh, struct netlink_ext_ack *extack) { + struct nlattr *mdb_attrs[MDBE_ATTR_MAX + 1]; struct net *net = sock_net(skb->sk); struct net_bridge_vlan_group *vg; struct net_bridge_port *p = NULL; @@ -930,7 +946,7 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh, struct net_bridge *br; int err; - err = br_mdb_parse(skb, nlh, &dev, &entry, extack); + err = br_mdb_parse(skb, nlh, &dev, &entry, mdb_attrs, extack); if (err < 0) return err; -- 2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 08/16] net: bridge: mdb: add support for add/del/dump of entries with source
From: Nikolay Aleksandrov <nikolay at nvidia.com> Add new mdb attributes (MDBE_ATTR_SOURCE for setting, MDBA_MDB_EATTR_SOURCE for dumping) to allow add/del and dump of mdb entries with a source address (S,G). New S,G entries are created with filter mode of MCAST_INCLUDE. The same attributes are used for IPv4 and IPv6, they're validated and parsed based on their protocol. S,G host joined entries which are added by user are not allowed yet. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/uapi/linux/if_bridge.h | 2 + net/bridge/br_mdb.c | 142 ++++++++++++++++++++++++++------- net/bridge/br_private.h | 14 ++++ 3 files changed, 130 insertions(+), 28 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index dc52f8cffa0d..3e6377c865eb 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -457,6 +457,7 @@ enum { MDBA_MDB_EATTR_TIMER, MDBA_MDB_EATTR_SRC_LIST, MDBA_MDB_EATTR_GROUP_MODE, + MDBA_MDB_EATTR_SOURCE, __MDBA_MDB_EATTR_MAX }; #define MDBA_MDB_EATTR_MAX (__MDBA_MDB_EATTR_MAX - 1) @@ -542,6 +543,7 @@ enum { */ enum { MDBE_ATTR_UNSPEC, + MDBE_ATTR_SOURCE, __MDBE_ATTR_MAX, }; #define MDBE_ATTR_MAX (__MDBE_ATTR_MAX - 1) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 907df6d695ec..7f9ca5c20120 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -64,17 +64,27 @@ static void __mdb_entry_fill_flags(struct br_mdb_entry *e, unsigned char flags) e->flags |= MDB_FLAGS_FAST_LEAVE; } -static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip) +static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip, + struct nlattr **mdb_attrs) { memset(ip, 0, sizeof(struct br_ip)); ip->vid = entry->vid; ip->proto = entry->addr.proto; - if (ip->proto == htons(ETH_P_IP)) + switch (ip->proto) { + case htons(ETH_P_IP): ip->dst.ip4 = entry->addr.u.ip4; + if (mdb_attrs && mdb_attrs[MDBE_ATTR_SOURCE]) + ip->src.ip4 = nla_get_in_addr(mdb_attrs[MDBE_ATTR_SOURCE]); + break; #if IS_ENABLED(CONFIG_IPV6) - else + case htons(ETH_P_IPV6): ip->dst.ip6 = entry->addr.u.ip6; + if (mdb_attrs && mdb_attrs[MDBE_ATTR_SOURCE]) + ip->src.ip6 = nla_get_in6_addr(mdb_attrs[MDBE_ATTR_SOURCE]); + break; #endif + } + } static int __mdb_fill_srcs(struct sk_buff *skb, @@ -172,30 +182,41 @@ static int __mdb_fill_info(struct sk_buff *skb, if (nla_put_nohdr(skb, sizeof(e), &e) || nla_put_u32(skb, MDBA_MDB_EATTR_TIMER, - br_timer_value(mtimer))) { - nla_nest_cancel(skb, nest_ent); - return -EMSGSIZE; - } + br_timer_value(mtimer))) + goto nest_err; switch (mp->addr.proto) { case htons(ETH_P_IP): - dump_srcs_mode = !!(p && mp->br->multicast_igmp_version == 3); + dump_srcs_mode = !!(mp->br->multicast_igmp_version == 3); + if (mp->addr.src.ip4) { + if (nla_put_in_addr(skb, MDBA_MDB_EATTR_SOURCE, + mp->addr.src.ip4)) + goto nest_err; + break; + } break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): - dump_srcs_mode = !!(p && mp->br->multicast_mld_version == 2); + dump_srcs_mode = !!(mp->br->multicast_mld_version == 2); + if (!ipv6_addr_any(&mp->addr.src.ip6)) { + if (nla_put_in6_addr(skb, MDBA_MDB_EATTR_SOURCE, + &mp->addr.src.ip6)) + goto nest_err; + break; + } break; #endif } - if (dump_srcs_mode && + if (p && dump_srcs_mode && (__mdb_fill_srcs(skb, p) || - nla_put_u8(skb, MDBA_MDB_EATTR_GROUP_MODE, p->filter_mode))) { - nla_nest_cancel(skb, nest_ent); - return -EMSGSIZE; - } - + nla_put_u8(skb, MDBA_MDB_EATTR_GROUP_MODE, p->filter_mode))) + goto nest_err; nla_nest_end(skb, nest_ent); return 0; + 
+nest_err: + nla_nest_cancel(skb, nest_ent); + return -EMSGSIZE; } static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb, @@ -395,12 +416,18 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) switch (pg->addr.proto) { case htons(ETH_P_IP): + /* MDBA_MDB_EATTR_SOURCE */ + if (pg->addr.src.ip4) + nlmsg_size += nla_total_size(sizeof(__be32)); if (pg->port->br->multicast_igmp_version == 2) goto out; addr_size = sizeof(__be32); break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): + /* MDBA_MDB_EATTR_SOURCE */ + if (!ipv6_addr_any(&pg->addr.src.ip6)) + nlmsg_size += nla_total_size(sizeof(struct in6_addr)); if (pg->port->br->multicast_mld_version == 1) goto out; addr_size = sizeof(struct in6_addr); @@ -670,7 +697,48 @@ static bool is_valid_mdb_entry(struct br_mdb_entry *entry, return true; } +static bool is_valid_mdb_source(struct nlattr *attr, __be16 proto, + struct netlink_ext_ack *extack) +{ + switch (proto) { + case htons(ETH_P_IP): + if (nla_len(attr) != sizeof(struct in_addr)) { + NL_SET_ERR_MSG_MOD(extack, "IPv4 invalid source address length"); + return false; + } + if (ipv4_is_multicast(nla_get_in_addr(attr))) { + NL_SET_ERR_MSG_MOD(extack, "IPv4 multicast source address is not allowed"); + return false; + } + break; +#if IS_ENABLED(CONFIG_IPV6) + case htons(ETH_P_IPV6): { + struct in6_addr src; + + if (nla_len(attr) != sizeof(struct in6_addr)) { + NL_SET_ERR_MSG_MOD(extack, "IPv6 invalid source address length"); + return false; + } + src = nla_get_in6_addr(attr); + if (ipv6_addr_is_multicast(&src)) { + NL_SET_ERR_MSG_MOD(extack, "IPv6 multicast source address is not allowed"); + return false; + } + break; + } +#endif + default: + NL_SET_ERR_MSG_MOD(extack, "Invalid protocol used with source address"); + return false; + } + + return true; +} + static const struct nla_policy br_mdbe_attrs_pol[MDBE_ATTR_MAX + 1] = { + [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY, + sizeof(struct in_addr), + sizeof(struct in6_addr)), }; static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, @@ -728,6 +796,10 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, br_mdbe_attrs_pol, extack); if (err) return err; + if (mdb_attrs[MDBE_ATTR_SOURCE] && + !is_valid_mdb_source(mdb_attrs[MDBE_ATTR_SOURCE], + entry->addr.proto, extack)) + return -EINVAL; } else { memset(mdb_attrs, 0, sizeof(struct nlattr *) * (MDBE_ATTR_MAX + 1)); @@ -744,8 +816,22 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, struct net_bridge_port_group *p; struct net_bridge_port_group __rcu **pp; unsigned long now = jiffies; + u8 filter_mode; int err; + /* host join errors which can happen before creating the group */ + if (!port) { + /* don't allow any flags for host-joined groups */ + if (entry->state) { + NL_SET_ERR_MSG_MOD(extack, "Flags are not allowed for host groups"); + return -EINVAL; + } + if (!br_multicast_is_star_g(group)) { + NL_SET_ERR_MSG_MOD(extack, "Groups with sources cannot be manually host joined"); + return -EINVAL; + } + } + mp = br_mdb_ip_get(br, group); if (!mp) { mp = br_multicast_new_group(br, group); @@ -756,11 +842,6 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, /* host join */ if (!port) { - /* don't allow any flags for host-joined groups */ - if (entry->state) { - NL_SET_ERR_MSG_MOD(extack, "Flags are not allowed for host groups"); - return -EINVAL; - } if (mp->host_joined) { NL_SET_ERR_MSG_MOD(extack, "Group is already joined by host"); return -EEXIST; @@ -783,8 
+864,11 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, break; } + filter_mode = br_multicast_is_star_g(group) ? MCAST_EXCLUDE : + MCAST_INCLUDE; + p = br_multicast_new_port_group(port, group, *pp, entry->state, NULL, - MCAST_EXCLUDE); + filter_mode); if (unlikely(!p)) { NL_SET_ERR_MSG_MOD(extack, "Couldn't allocate new port group"); return -ENOMEM; @@ -800,12 +884,13 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, static int __br_mdb_add(struct net *net, struct net_bridge *br, struct net_bridge_port *p, struct br_mdb_entry *entry, + struct nlattr **mdb_attrs, struct netlink_ext_ack *extack) { struct br_ip ip; int ret; - __mdb_entry_to_br_ip(entry, &ip); + __mdb_entry_to_br_ip(entry, &ip, mdb_attrs); spin_lock_bh(&br->multicast_lock); ret = br_mdb_add_group(br, p, &ip, entry, extack); @@ -875,18 +960,19 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, if (br_vlan_enabled(br->dev) && vg && entry->vid == 0) { list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; - err = __br_mdb_add(net, br, p, entry, extack); + err = __br_mdb_add(net, br, p, entry, mdb_attrs, extack); if (err) break; } } else { - err = __br_mdb_add(net, br, p, entry, extack); + err = __br_mdb_add(net, br, p, entry, mdb_attrs, extack); } return err; } -static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry) +static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry, + struct nlattr **mdb_attrs) { struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; @@ -897,7 +983,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry) if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) return -EINVAL; - __mdb_entry_to_br_ip(entry, &ip); + __mdb_entry_to_br_ip(entry, &ip, mdb_attrs); spin_lock_bh(&br->multicast_lock); mp = br_mdb_ip_get(br, &ip); @@ -971,10 +1057,10 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh, if (br_vlan_enabled(br->dev) && vg && entry->vid == 0) { list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; - err = __br_mdb_del(br, entry); + err = __br_mdb_del(br, entry, mdb_attrs); } } else { - err = __br_mdb_del(br, entry); + err = __br_mdb_del(br, entry, mdb_attrs); } return err; diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index a23d2bae56e1..0f54a7a7c186 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -873,6 +873,20 @@ static inline bool br_multicast_querier_exists(struct net_bridge *br, } } +static inline bool br_multicast_is_star_g(const struct br_ip *ip) +{ + switch (ip->proto) { + case htons(ETH_P_IP): + return ipv4_is_zeronet(ip->src.ip4); +#if IS_ENABLED(CONFIG_IPV6) + case htons(ETH_P_IPV6): + return ipv6_addr_any(&ip->src.ip6); +#endif + default: + return false; + } +} + static inline int br_multicast_igmp_type(const struct sk_buff *skb) { return BR_INPUT_SKB_CB(skb)->igmp; -- 2.25.4
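Since iproute2 support is only planned at this point (see the cover letter),
here is a rough user-space sketch of what an RTM_NEWMDB request carrying the
new MDBE_ATTR_SOURCE attribute could look like. It assumes libmnl; the
addresses, entry state and netlink flag choices are illustrative and error
handling is omitted:

#include <arpa/inet.h>
#include <libmnl/libmnl.h>
#include <linux/if_bridge.h>
#include <linux/if_ether.h>
#include <linux/rtnetlink.h>
#include <string.h>

/* Add an S,G entry: group 239.1.1.1, source 192.0.2.1, on the port with
 * ifindex port_ifindex behind the bridge with ifindex br_ifindex. */
int add_sg_entry(struct mnl_socket *nl, int br_ifindex, int port_ifindex)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct br_mdb_entry entry;
	struct br_port_msg *bpm;
	struct nlmsghdr *nlh;
	struct nlattr *nest;
	struct in_addr grp, src;

	inet_pton(AF_INET, "239.1.1.1", &grp);
	inet_pton(AF_INET, "192.0.2.1", &src);

	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_NEWMDB;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_ACK;
	bpm = mnl_nlmsg_put_extra_header(nlh, sizeof(*bpm));
	bpm->family = AF_BRIDGE;
	bpm->ifindex = br_ifindex;

	memset(&entry, 0, sizeof(entry));
	entry.ifindex = port_ifindex;
	entry.state = MDB_PERMANENT;
	entry.addr.proto = htons(ETH_P_IP);
	entry.addr.u.ip4 = grp.s_addr;
	mnl_attr_put(nlh, MDBA_SET_ENTRY, sizeof(entry), &entry);

	/* new in this series: nested attributes extending br_mdb_entry */
	nest = mnl_attr_nest_start(nlh, MDBA_SET_ENTRY_ATTRS);
	mnl_attr_put(nlh, MDBE_ATTR_SOURCE, sizeof(src.s_addr), &src.s_addr);
	mnl_attr_nest_end(nlh, nest);

	return mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0 ? -1 : 0;
}

Note that mnl_attr_nest_start() sets NLA_F_NESTED on the container, which the
strict nla_parse_nested() call used in br_mdb_parse() expects.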
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 09/16] net: bridge: mcast: when igmpv3/mldv2 are enabled lookup (S, G) first, then (*, G)
From: Nikolay Aleksandrov <nikolay at nvidia.com>

If (S,G) entries are enabled (IGMPv3/MLDv2) then look them up first. If
there isn't a present (S,G) entry then try to find (*,G).

Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com>
---
 net/bridge/br_multicast.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index e1fb822b9ddb..4fd690bc848f 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -127,10 +127,28 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br,
 	switch (skb->protocol) {
 	case htons(ETH_P_IP):
 		ip.dst.ip4 = ip_hdr(skb)->daddr;
+		if (br->multicast_igmp_version == 3) {
+			struct net_bridge_mdb_entry *mdb;
+
+			ip.src.ip4 = ip_hdr(skb)->saddr;
+			mdb = br_mdb_ip_get_rcu(br, &ip);
+			if (mdb)
+				return mdb;
+			ip.src.ip4 = 0;
+		}
 		break;
 #if IS_ENABLED(CONFIG_IPV6)
 	case htons(ETH_P_IPV6):
 		ip.dst.ip6 = ipv6_hdr(skb)->daddr;
+		if (br->multicast_mld_version == 2) {
+			struct net_bridge_mdb_entry *mdb;
+
+			ip.src.ip6 = ipv6_hdr(skb)->saddr;
+			mdb = br_mdb_ip_get_rcu(br, &ip);
+			if (mdb)
+				return mdb;
+			memset(&ip.src.ip6, 0, sizeof(ip.src.ip6));
+		}
 		break;
 #endif
 	default:
--
2.25.4
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 10/16] net: bridge: mcast: add rt_protocol field to the port group struct
From: Nikolay Aleksandrov <nikolay at nvidia.com> We need to be able to differentiate between pg entries created by user-space and the kernel when we start generating S,G entries for IGMPv3/MLDv2's fast path. User-space entries are created by default as RTPROT_STATIC and the kernel entries are RTPROT_KERNEL. Later we can allow user-space to provide the entry rt_protocol so we can differentiate between who added the entries specifically (e.g. clag, admin, frr etc). Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_mdb.c | 42 +++++++++++++++++++++------------- net/bridge/br_multicast.c | 7 ++++-- net/bridge/br_private.h | 3 ++- 4 files changed, 34 insertions(+), 19 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 3e6377c865eb..1054f151078d 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -458,6 +458,7 @@ enum { MDBA_MDB_EATTR_SRC_LIST, MDBA_MDB_EATTR_GROUP_MODE, MDBA_MDB_EATTR_SOURCE, + MDBA_MDB_EATTR_RTPROT, __MDBA_MDB_EATTR_MAX }; #define MDBA_MDB_EATTR_MAX (__MDBA_MDB_EATTR_MAX - 1) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 7f9ca5c20120..b386a5e07698 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -184,6 +184,7 @@ static int __mdb_fill_info(struct sk_buff *skb, MDBA_MDB_EATTR_TIMER, br_timer_value(mtimer))) goto nest_err; + switch (mp->addr.proto) { case htons(ETH_P_IP): dump_srcs_mode = !!(mp->br->multicast_igmp_version == 3); @@ -206,10 +207,15 @@ static int __mdb_fill_info(struct sk_buff *skb, break; #endif } - if (p && dump_srcs_mode && - (__mdb_fill_srcs(skb, p) || - nla_put_u8(skb, MDBA_MDB_EATTR_GROUP_MODE, p->filter_mode))) - goto nest_err; + if (p) { + if (nla_put_u8(skb, MDBA_MDB_EATTR_RTPROT, p->rt_protocol)) + goto nest_err; + if (dump_srcs_mode && + (__mdb_fill_srcs(skb, p) || + nla_put_u8(skb, MDBA_MDB_EATTR_GROUP_MODE, + p->filter_mode))) + goto nest_err; + } nla_nest_end(skb, nest_ent); return 0; @@ -414,6 +420,9 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) if (!pg) goto out; + /* MDBA_MDB_EATTR_RTPROT */ + nlmsg_size += nla_total_size(sizeof(u8)); + switch (pg->addr.proto) { case htons(ETH_P_IP): /* MDBA_MDB_EATTR_SOURCE */ @@ -809,16 +818,20 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, } static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, - struct br_ip *group, struct br_mdb_entry *entry, + struct br_mdb_entry *entry, + struct nlattr **mdb_attrs, struct netlink_ext_ack *extack) { struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; struct net_bridge_port_group __rcu **pp; unsigned long now = jiffies; + struct br_ip group; u8 filter_mode; int err; + __mdb_entry_to_br_ip(entry, &group, mdb_attrs); + /* host join errors which can happen before creating the group */ if (!port) { /* don't allow any flags for host-joined groups */ @@ -826,15 +839,15 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, NL_SET_ERR_MSG_MOD(extack, "Flags are not allowed for host groups"); return -EINVAL; } - if (!br_multicast_is_star_g(group)) { + if (!br_multicast_is_star_g(&group)) { NL_SET_ERR_MSG_MOD(extack, "Groups with sources cannot be manually host joined"); return -EINVAL; } } - mp = br_mdb_ip_get(br, group); + mp = br_mdb_ip_get(br, &group); if (!mp) { - mp = br_multicast_new_group(br, group); + mp = br_multicast_new_group(br, &group); err = PTR_ERR_OR_ZERO(mp); if (err) return err; @@ 
-864,11 +877,11 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, break; } - filter_mode = br_multicast_is_star_g(group) ? MCAST_EXCLUDE : - MCAST_INCLUDE; + filter_mode = br_multicast_is_star_g(&group) ? MCAST_EXCLUDE : + MCAST_INCLUDE; - p = br_multicast_new_port_group(port, group, *pp, entry->state, NULL, - filter_mode); + p = br_multicast_new_port_group(port, &group, *pp, entry->state, NULL, + filter_mode, RTPROT_STATIC); if (unlikely(!p)) { NL_SET_ERR_MSG_MOD(extack, "Couldn't allocate new port group"); return -ENOMEM; @@ -887,13 +900,10 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br, struct nlattr **mdb_attrs, struct netlink_ext_ack *extack) { - struct br_ip ip; int ret; - __mdb_entry_to_br_ip(entry, &ip, mdb_attrs); - spin_lock_bh(&br->multicast_lock); - ret = br_mdb_add_group(br, p, &ip, entry, extack); + ret = br_mdb_add_group(br, p, entry, mdb_attrs, extack); spin_unlock_bh(&br->multicast_lock); return ret; diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 4fd690bc848f..b6e7b0ece422 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -795,7 +795,8 @@ struct net_bridge_port_group *br_multicast_new_port_group( struct net_bridge_port_group __rcu *next, unsigned char flags, const unsigned char *src, - u8 filter_mode) + u8 filter_mode, + u8 rt_protocol) { struct net_bridge_port_group *p; @@ -807,6 +808,7 @@ struct net_bridge_port_group *br_multicast_new_port_group( p->port = port; p->flags = flags; p->filter_mode = filter_mode; + p->rt_protocol = rt_protocol; p->mcast_gc.destroy = br_multicast_destroy_port_group; INIT_HLIST_HEAD(&p->src_list); rcu_assign_pointer(p->next, next); @@ -892,7 +894,8 @@ static int br_multicast_add_group(struct net_bridge *br, break; } - p = br_multicast_new_port_group(port, group, *pp, 0, src, filter_mode); + p = br_multicast_new_port_group(port, group, *pp, 0, src, filter_mode, + RTPROT_KERNEL); if (unlikely(!p)) goto err; rcu_assign_pointer(*pp, p); diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 0f54a7a7c186..dae7e3526fc7 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -246,6 +246,7 @@ struct net_bridge_port_group { unsigned char flags; unsigned char filter_mode; unsigned char grp_query_rexmit_cnt; + unsigned char rt_protocol; struct hlist_head src_list; unsigned int src_ents; @@ -804,7 +805,7 @@ struct net_bridge_port_group * br_multicast_new_port_group(struct net_bridge_port *port, struct br_ip *group, struct net_bridge_port_group __rcu *next, unsigned char flags, const unsigned char *src, - u8 filter_mode); + u8 filter_mode, u8 rt_protocol); int br_mdb_hash_init(struct net_bridge *br); void br_mdb_hash_fini(struct net_bridge *br); void br_mdb_notify(struct net_device *dev, struct net_bridge_mdb_entry *mp, -- 2.25.4
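As a small illustration of how a dump consumer could interpret the new
MDBA_MDB_EATTR_RTPROT value once it has parsed it out of a notification or
dump (a hypothetical helper, not part of the patch):

#include <linux/rtnetlink.h>
#include <stdint.h>

/* Map a port group's rt_protocol to a label, mirroring the split described
 * above between kernel-created and user-added entries. */
const char *mdb_rtprot_name(uint8_t rtprot)
{
	switch (rtprot) {
	case RTPROT_KERNEL:	/* auto-installed by the bridge from reports */
		return "kernel";
	case RTPROT_STATIC:	/* added through the MDB netlink API */
		return "static";
	default:		/* possible future user-supplied values (e.g. FRR, clag) */
		return "other";
	}
}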
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 11/16] net: bridge: mcast: add sg_port rhashtable
From: Nikolay Aleksandrov <nikolay at nvidia.com> To speedup S,G forward handling we need to be able to quickly find out if a port is a member of an S,G group. To do that add a global S,G port rhashtable with key: source addr, group addr, protocol, vid (all br_ip fields) and port pointer. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_forward.c | 2 +- net/bridge/br_mdb.c | 34 +++++----- net/bridge/br_multicast.c | 130 +++++++++++++++++++++++++------------- net/bridge/br_private.h | 10 ++- 4 files changed, 111 insertions(+), 65 deletions(-) diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c index 7629b63f6f30..4d12999e4576 100644 --- a/net/bridge/br_forward.c +++ b/net/bridge/br_forward.c @@ -281,7 +281,7 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, while (p || rp) { struct net_bridge_port *port, *lport, *rport; - lport = p ? p->port : NULL; + lport = p ? p->key.port : NULL; rport = hlist_entry_safe(rp, struct net_bridge_port, rlist); if ((unsigned long)lport > (unsigned long)rport) { diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index b386a5e07698..4e3a5cefc626 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -101,7 +101,7 @@ static int __mdb_fill_srcs(struct sk_buff *skb, return -EMSGSIZE; hlist_for_each_entry_rcu(ent, &p->src_list, node, - lockdep_is_held(&p->port->br->multicast_lock)) { + lockdep_is_held(&p->key.port->br->multicast_lock)) { nest_ent = nla_nest_start(skb, MDBA_MDB_SRCLIST_ENTRY); if (!nest_ent) goto out_cancel_err; @@ -156,7 +156,7 @@ static int __mdb_fill_info(struct sk_buff *skb, memset(&e, 0, sizeof(e)); if (p) { - ifindex = p->port->dev->ifindex; + ifindex = p->key.port->dev->ifindex; mtimer = &p->timer; flags = p->flags; } else { @@ -263,7 +263,7 @@ static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb, for (pp = &mp->ports; (p = rcu_dereference(*pp)) != NULL; pp = &p->next) { - if (!p->port) + if (!p->key.port) continue; if (pidx < s_pidx) goto skip_pg; @@ -423,21 +423,21 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) /* MDBA_MDB_EATTR_RTPROT */ nlmsg_size += nla_total_size(sizeof(u8)); - switch (pg->addr.proto) { + switch (pg->key.addr.proto) { case htons(ETH_P_IP): /* MDBA_MDB_EATTR_SOURCE */ - if (pg->addr.src.ip4) + if (pg->key.addr.src.ip4) nlmsg_size += nla_total_size(sizeof(__be32)); - if (pg->port->br->multicast_igmp_version == 2) + if (pg->key.port->br->multicast_igmp_version == 2) goto out; addr_size = sizeof(__be32); break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): /* MDBA_MDB_EATTR_SOURCE */ - if (!ipv6_addr_any(&pg->addr.src.ip6)) + if (!ipv6_addr_any(&pg->key.addr.src.ip6)) nlmsg_size += nla_total_size(sizeof(struct in6_addr)); - if (pg->port->br->multicast_mld_version == 1) + if (pg->key.port->br->multicast_mld_version == 1) goto out; addr_size = sizeof(struct in6_addr); break; @@ -486,7 +486,7 @@ static void br_mdb_complete(struct net_device *dev, int err, void *priv) goto out; for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port != port) + if (p->key.port != port) continue; p->flags |= MDB_PG_FLAGS_OFFLOAD; } @@ -561,21 +561,21 @@ void br_mdb_notify(struct net_device *dev, else ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr); #endif - mdb.obj.orig_dev = pg->port->dev; + mdb.obj.orig_dev = pg->key.port->dev; switch (type) { case RTM_NEWMDB: complete_info = kmalloc(sizeof(*complete_info), GFP_ATOMIC); if (!complete_info) break; - complete_info->port = pg->port; + 
complete_info->port = pg->key.port; complete_info->ip = mp->addr; mdb.obj.complete_priv = complete_info; mdb.obj.complete = br_mdb_complete; - if (switchdev_port_obj_add(pg->port->dev, &mdb.obj, NULL)) + if (switchdev_port_obj_add(pg->key.port->dev, &mdb.obj, NULL)) kfree(complete_info); break; case RTM_DELMDB: - switchdev_port_obj_del(pg->port->dev, &mdb.obj); + switchdev_port_obj_del(pg->key.port->dev, &mdb.obj); break; } } else { @@ -869,11 +869,11 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port == port) { + if (p->key.port == port) { NL_SET_ERR_MSG_MOD(extack, "Group is already joined by port"); return -EEXIST; } - if ((unsigned long)p->port < (unsigned long)port) + if ((unsigned long)p->key.port < (unsigned long)port) break; } @@ -1013,10 +1013,10 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (!p->port || p->port->dev->ifindex != entry->ifindex) + if (!p->key.port || p->key.port->dev->ifindex != entry->ifindex) continue; - if (p->port->state == BR_STATE_DISABLED) + if (p->key.port->state == BR_STATE_DISABLED) goto unlock; br_multicast_del_pg(mp, p, pp); diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index b6e7b0ece422..0fec9f38787c 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -41,6 +41,13 @@ static const struct rhashtable_params br_mdb_rht_params = { .automatic_shrinking = true, }; +static const struct rhashtable_params br_sg_port_rht_params = { + .head_offset = offsetof(struct net_bridge_port_group, rhnode), + .key_offset = offsetof(struct net_bridge_port_group, key), + .key_len = sizeof(struct net_bridge_port_group_sg_key), + .automatic_shrinking = true, +}; + static void br_multicast_start_querier(struct net_bridge *br, struct bridge_mcast_own_query *query); static void br_multicast_add_router(struct net_bridge *br, @@ -60,6 +67,16 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, __u16 vid, const unsigned char *src); #endif +static struct net_bridge_port_group * +br_sg_port_find(struct net_bridge *br, + struct net_bridge_port_group_sg_key *sg_p) +{ + lockdep_assert_held_once(&br->multicast_lock); + + return rhashtable_lookup_fast(&br->sg_port_tbl, sg_p, + br_sg_port_rht_params); +} + static struct net_bridge_mdb_entry *br_mdb_ip_get_rcu(struct net_bridge *br, struct br_ip *dst) { @@ -212,7 +229,7 @@ static void br_multicast_destroy_group_src(struct net_bridge_mcast_gc *gc) static void br_multicast_del_group_src(struct net_bridge_group_src *src) { - struct net_bridge *br = src->pg->port->br; + struct net_bridge *br = src->pg->key.port->br; hlist_del_init_rcu(&src->node); src->pg->src_ents--; @@ -237,10 +254,12 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, struct net_bridge_port_group *pg, struct net_bridge_port_group __rcu **pp) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; struct hlist_node *tmp; + rhashtable_remove_fast(&br->sg_port_tbl, &pg->rhnode, + br_sg_port_rht_params); rcu_assign_pointer(*pp, pg->next); hlist_del_init(&pg->mglist); hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) @@ -260,7 +279,7 @@ static void br_multicast_find_del_pg(struct net_bridge *br, struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; - mp = br_mdb_ip_get(br, &pg->addr); + mp = 
br_mdb_ip_get(br, &pg->key.addr); if (WARN_ON(!mp)) return; @@ -281,7 +300,7 @@ static void br_multicast_port_group_expired(struct timer_list *t) { struct net_bridge_port_group *pg = from_timer(pg, t, timer); struct net_bridge_group_src *src_ent; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct hlist_node *tmp; bool changed; @@ -302,7 +321,7 @@ static void br_multicast_port_group_expired(struct timer_list *t) if (hlist_empty(&pg->src_list)) { br_multicast_find_del_pg(br, pg); } else if (changed) { - struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->addr); + struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->key.addr); if (WARN_ON(!mp)) goto out; @@ -330,7 +349,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_port *p = pg ? pg->port : NULL; + struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, igmp_hdr_size; unsigned long now = jiffies; @@ -476,7 +495,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_port *p = pg ? pg->port : NULL; + struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, mld_hdr_size; unsigned long now = jiffies; @@ -778,7 +797,7 @@ br_multicast_new_group_src(struct net_bridge_port_group *pg, struct br_ip *src_i return NULL; grp_src->pg = pg; - grp_src->br = pg->port->br; + grp_src->br = pg->key.port->br; grp_src->addr = *src_ip; grp_src->mcast_gc.destroy = br_multicast_destroy_group_src; timer_setup(&grp_src->timer, br_multicast_group_src_expired, 0); @@ -804,13 +823,21 @@ struct net_bridge_port_group *br_multicast_new_port_group( if (unlikely(!p)) return NULL; - p->addr = *group; - p->port = port; + p->key.addr = *group; + p->key.port = port; p->flags = flags; p->filter_mode = filter_mode; p->rt_protocol = rt_protocol; p->mcast_gc.destroy = br_multicast_destroy_port_group; INIT_HLIST_HEAD(&p->src_list); + + if (!br_multicast_is_star_g(group) && + rhashtable_lookup_insert_fast(&port->br->sg_port_tbl, &p->rhnode, + br_sg_port_rht_params)) { + kfree(p); + return NULL; + } + rcu_assign_pointer(p->next, next); timer_setup(&p->timer, br_multicast_port_group_expired, 0); timer_setup(&p->rexmit_timer, br_multicast_port_group_rexmit, 0); @@ -828,7 +855,7 @@ static bool br_port_group_equal(struct net_bridge_port_group *p, struct net_bridge_port *port, const unsigned char *src) { - if (p->port != port) + if (p->key.port != port) return false; if (!(port->flags & BR_MULTICAST_TO_UNICAST)) @@ -890,7 +917,7 @@ static int br_multicast_add_group(struct net_bridge *br, pp = &p->next) { if (br_port_group_equal(p, port, src)) goto found; - if ((unsigned long)p->port < (unsigned long)port) + if ((unsigned long)p->key.port < (unsigned long)port) break; } @@ -1166,7 +1193,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) { struct net_bridge_port_group *pg = from_timer(pg, t, rexmit_timer); struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool need_rexmit = false; spin_lock(&br->multicast_lock); @@ -1175,7 +1202,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) !br_opt_get(br, BROPT_MULTICAST_QUERIER)) goto out; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = 
&br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1187,11 +1214,11 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) if (pg->grp_query_rexmit_cnt) { pg->grp_query_rexmit_cnt--; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, false, 1, NULL); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, false, 1, NULL); } - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, true, 0, &need_rexmit); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, true, 0, &need_rexmit); if (pg->grp_query_rexmit_cnt || need_rexmit) mod_timer(&pg->rexmit_timer, jiffies + @@ -1325,7 +1352,7 @@ static int __grp_src_delete_marked(struct net_bridge_port_group *pg) static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 lmqc = br->multicast_last_member_count; unsigned long lmqt, lmi, now = jiffies; struct net_bridge_group_src *ent; @@ -1334,7 +1361,7 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) !br_opt_get(br, BROPT_MULTICAST_ENABLED)) return; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1359,8 +1386,8 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) !other_query || timer_pending(&other_query->timer)) return; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, true, 1, NULL); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, true, 1, NULL); lmi = now + br->multicast_last_member_interval; if (!timer_pending(&pg->rexmit_timer) || @@ -1371,14 +1398,14 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; unsigned long now = jiffies, lmi; if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) return; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1389,8 +1416,8 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) other_query && !timer_pending(&other_query->timer)) { lmi = now + br->multicast_last_member_interval; pg->grp_query_rexmit_cnt = br->multicast_last_member_count - 1; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, false, 0, NULL); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, false, 0, NULL); if (!timer_pending(&pg->rexmit_timer) || time_after(pg->rexmit_timer.expires, lmi)) mod_timer(&pg->rexmit_timer, lmi); @@ -1410,7 +1437,7 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -1418,7 +1445,7 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, u32 src_idx; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + 
src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1452,7 +1479,7 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1475,7 +1502,7 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -1486,7 +1513,7 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1512,7 +1539,7 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, static bool br_multicast_isexc(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { @@ -1538,7 +1565,7 @@ static bool br_multicast_isexc(struct net_bridge_port_group *pg, static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -1549,7 +1576,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1580,7 +1607,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -1592,7 +1619,7 @@ static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1653,7 +1680,7 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, ent->flags = (ent->flags & ~BR_SGRP_F_SEND) | BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1691,7 +1718,7 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, ent->flags = 
(ent->flags & ~BR_SGRP_F_SEND) | BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1722,7 +1749,7 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, static bool br_multicast_toex(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { @@ -1755,7 +1782,7 @@ static void __grp_src_block_incl(struct net_bridge_port_group *pg, ent->flags &= ~BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1770,7 +1797,7 @@ static void __grp_src_block_incl(struct net_bridge_port_group *pg, __grp_src_query_marked_and_rexmit(pg); if (pg->filter_mode == MCAST_INCLUDE && hlist_empty(&pg->src_list)) - br_multicast_find_del_pg(pg->port->br, pg); + br_multicast_find_del_pg(pg->key.port->br, pg); } /* State Msg type New state Actions @@ -1789,7 +1816,7 @@ static bool __grp_src_block_excl(struct net_bridge_port_group *pg, ent->flags &= ~BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -2496,7 +2523,7 @@ br_multicast_leave_group(struct net_bridge *br, for (p = mlock_dereference(mp->ports, br); p != NULL; p = mlock_dereference(p->next, br)) { - if (p->port != port) + if (p->key.port != port) continue; if (!hlist_unhashed(&p->mglist) && @@ -3256,7 +3283,7 @@ int br_multicast_list_adjacent(struct net_device *dev, if (!entry) goto unlock; - entry->addr = group->addr; + entry->addr = group->key.addr; list_add(&entry->list, br_ip_list); count++; } @@ -3513,10 +3540,23 @@ void br_multicast_get_stats(const struct net_bridge *br, int br_mdb_hash_init(struct net_bridge *br) { - return rhashtable_init(&br->mdb_hash_tbl, &br_mdb_rht_params); + int err; + + err = rhashtable_init(&br->sg_port_tbl, &br_sg_port_rht_params); + if (err) + return err; + + err = rhashtable_init(&br->mdb_hash_tbl, &br_mdb_rht_params); + if (err) { + rhashtable_destroy(&br->sg_port_tbl); + return err; + } + + return 0; } void br_mdb_hash_fini(struct net_bridge *br) { + rhashtable_destroy(&br->sg_port_tbl); rhashtable_destroy(&br->mdb_hash_tbl); } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index dae7e3526fc7..55486b4956d3 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -238,10 +238,14 @@ struct net_bridge_group_src { struct rcu_head rcu; }; -struct net_bridge_port_group { +struct net_bridge_port_group_sg_key { struct net_bridge_port *port; - struct net_bridge_port_group __rcu *next; struct br_ip addr; +}; + +struct net_bridge_port_group { + struct net_bridge_port_group __rcu *next; + struct net_bridge_port_group_sg_key key; unsigned char eth_addr[ETH_ALEN] __aligned(2); unsigned char flags; unsigned char filter_mode; @@ -254,6 +258,7 @@ struct net_bridge_port_group { struct timer_list rexmit_timer; struct hlist_node mglist; + struct rhash_head rhnode; struct net_bridge_mcast_gc mcast_gc; struct rcu_head rcu; }; @@ -441,6 +446,7 @@ 
struct net_bridge { unsigned long multicast_startup_query_interval; struct rhashtable mdb_hash_tbl; + struct rhashtable sg_port_tbl; struct hlist_head mcast_gc_list; struct hlist_head mdb_list; -- 2.25.4
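For readers unfamiliar with rhashtable keying: the lookup key here is the whole net_bridge_port_group_sg_key, i.e. the port plus every br_ip field (source, group, protocol, vid). The userspace sketch below models that composite-key lookup with a linear scan purely to show what the key covers; the names and the IPv4-only addressing are illustrative assumptions, while the real table does an O(1) hash lookup over the same bytes:

/* Illustrative model only: what the S,G,port lookup key covers. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct sg_key_model {
	int      port_ifindex;	/* stands in for the port pointer */
	uint32_t src;		/* source address (IPv4 here for brevity) */
	uint32_t dst;		/* group address */
	uint16_t proto;
	uint16_t vid;
};

struct pg_model {
	struct sg_key_model key;
	int blocked;
};

static struct pg_model *sg_port_find(struct pg_model *tbl, int n,
				     const struct sg_key_model *key)
{
	for (int i = 0; i < n; i++)
		if (!memcmp(&tbl[i].key, key, sizeof(*key)))
			return &tbl[i];
	return NULL;
}

int main(void)
{
	struct pg_model tbl[] = {
		{ { 2, 0x0a000001, 0xe1010101, 0x0800, 1 }, 0 },
		{ { 3, 0x0a000002, 0xe1010101, 0x0800, 1 }, 1 },
	};
	struct sg_key_model k = { 3, 0x0a000002, 0xe1010101, 0x0800, 1 };
	struct pg_model *pg = sg_port_find(tbl, 2, &k);

	printf("found=%d blocked=%d\n", pg != NULL, pg ? pg->blocked : -1);
	return 0;
}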
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 12/16] net: bridge: mcast: install S,G entries automatically based on reports

From: Nikolay Aleksandrov <nikolay at nvidia.com> This patch adds support for automatic install of S,G mdb entries based on the port group's source list and the source entry's timer. Once installed the S,G will be used when forwarding packets if the approprate multicast/mld versions are set. A new source flag called BR_SGRP_F_INSTALLED denotes if the source has a forwarding mdb entry installed. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_multicast.c | 176 +++++++++++++++++++++++++++++--------- net/bridge/br_private.h | 1 + 2 files changed, 138 insertions(+), 39 deletions(-) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 0fec9f38787c..ece8ac805e98 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -66,6 +66,13 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, const struct in6_addr *group, __u16 vid, const unsigned char *src); #endif +static struct net_bridge_port_group * +__br_multicast_add_group(struct net_bridge *br, + struct net_bridge_port *port, + struct br_ip *group, + const unsigned char *src, + u8 filter_mode, + bool igmpv2_mldv1); static struct net_bridge_port_group * br_sg_port_find(struct net_bridge *br, @@ -175,6 +182,81 @@ struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, return br_mdb_ip_get_rcu(br, &ip); } +static bool br_port_group_equal(struct net_bridge_port_group *p, + struct net_bridge_port *port, + const unsigned char *src) +{ + if (p->key.port != port) + return false; + + if (!(port->flags & BR_MULTICAST_TO_UNICAST)) + return true; + + return ether_addr_equal(src, p->eth_addr); +} + +static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) +{ + struct net_bridge_port_group *sg; + struct br_ip sg_ip; + + if (src->flags & BR_SGRP_F_INSTALLED) + return; + + memset(&sg_ip, 0, sizeof(sg_ip)); + sg_ip = src->pg->key.addr; + sg_ip.src = src->addr.src; + sg = __br_multicast_add_group(src->br, src->pg->key.port, &sg_ip, + src->pg->eth_addr, MCAST_INCLUDE, false); + if (IS_ERR_OR_NULL(sg)) + return; + src->flags |= BR_SGRP_F_INSTALLED; + + /* if it was added by user-space as perm we can skip next steps */ + if (sg->rt_protocol != RTPROT_KERNEL && + (sg->flags & MDB_PG_FLAGS_PERMANENT)) + return; + + /* the kernel is now responsible for removing this S,G */ + del_timer(&sg->timer); +} + +static void br_multicast_fwd_src_remove(struct net_bridge_group_src *src) +{ + struct net_bridge_port_group *p, *pg = src->pg; + struct net_bridge_port_group __rcu **pp; + struct net_bridge_mdb_entry *mp; + struct br_ip sg_ip; + + memset(&sg_ip, 0, sizeof(sg_ip)); + sg_ip = pg->key.addr; + sg_ip.src = src->addr.src; + + mp = br_mdb_ip_get(src->br, &sg_ip); + if (!mp) + return; + + for (pp = &mp->ports; + (p = mlock_dereference(*pp, src->br)) != NULL; + pp = &p->next) { + if (!br_port_group_equal(p, pg->key.port, pg->eth_addr)) + continue; + + if (p->rt_protocol != RTPROT_KERNEL && + (p->flags & MDB_PG_FLAGS_PERMANENT)) + break; + + br_multicast_del_pg(mp, p, pp); + break; + } + src->flags &= ~BR_SGRP_F_INSTALLED; +} + +static void br_multicast_fwd_src_handle(struct net_bridge_group_src *src) +{ + br_multicast_fwd_src_add(src); +} + static void br_multicast_destroy_mdb_entry(struct net_bridge_mcast_gc *gc) { struct net_bridge_mdb_entry *mp; @@ -204,7 +286,8 @@ static void br_multicast_group_expired(struct timer_list *t) struct net_bridge *br = mp->br; spin_lock(&br->multicast_lock); - if (!netif_running(br->dev) || timer_pending(&mp->timer)) + if 
(hlist_unhashed(&mp->mdb_node) || !netif_running(br->dev) || + timer_pending(&mp->timer)) goto out; br_multicast_host_leave(mp, true); @@ -231,6 +314,7 @@ static void br_multicast_del_group_src(struct net_bridge_group_src *src) { struct net_bridge *br = src->pg->key.port->br; + br_multicast_fwd_src_remove(src); hlist_del_init_rcu(&src->node); src->pg->src_ents--; hlist_add_head(&src->mcast_gc.gc_node, &br->mcast_gc_list); @@ -851,19 +935,6 @@ struct net_bridge_port_group *br_multicast_new_port_group( return p; } -static bool br_port_group_equal(struct net_bridge_port_group *p, - struct net_bridge_port *port, - const unsigned char *src) -{ - if (p->key.port != port) - return false; - - if (!(port->flags & BR_MULTICAST_TO_UNICAST)) - return true; - - return ether_addr_equal(src, p->eth_addr); -} - void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify) { if (!mp->host_joined) { @@ -884,28 +955,26 @@ void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify) br_mdb_notify(mp->br->dev, mp, NULL, RTM_DELMDB); } -static int br_multicast_add_group(struct net_bridge *br, - struct net_bridge_port *port, - struct br_ip *group, - const unsigned char *src, - u8 filter_mode, - bool igmpv2_mldv1) +static struct net_bridge_port_group * +__br_multicast_add_group(struct net_bridge *br, + struct net_bridge_port *port, + struct br_ip *group, + const unsigned char *src, + u8 filter_mode, + bool igmpv2_mldv1) { struct net_bridge_port_group __rcu **pp; - struct net_bridge_port_group *p; + struct net_bridge_port_group *p = NULL; struct net_bridge_mdb_entry *mp; unsigned long now = jiffies; - int err; - spin_lock(&br->multicast_lock); if (!netif_running(br->dev) || (port && port->state == BR_STATE_DISABLED)) goto out; mp = br_multicast_new_group(br, group); - err = PTR_ERR(mp); if (IS_ERR(mp)) - goto err; + return ERR_PTR(PTR_ERR(mp)); if (!port) { br_multicast_host_join(mp, true); @@ -923,8 +992,10 @@ static int br_multicast_add_group(struct net_bridge *br, p = br_multicast_new_port_group(port, group, *pp, 0, src, filter_mode, RTPROT_KERNEL); - if (unlikely(!p)) - goto err; + if (unlikely(!p)) { + p = ERR_PTR(-ENOMEM); + goto out; + } rcu_assign_pointer(*pp, p); br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); @@ -933,10 +1004,26 @@ static int br_multicast_add_group(struct net_bridge *br, mod_timer(&p->timer, now + br->multicast_membership_interval); out: - err = 0; + return p; +} + +static int br_multicast_add_group(struct net_bridge *br, + struct net_bridge_port *port, + struct br_ip *group, + const unsigned char *src, + u8 filter_mode, + bool igmpv2_mldv1) +{ + struct net_bridge_port_group *pg; + int err; -err: + spin_lock(&br->multicast_lock); + pg = __br_multicast_add_group(br, port, group, src, filter_mode, + igmpv2_mldv1); + /* NULL is considered valid for host joined groups */ + err = IS_ERR(pg) ? 
PTR_ERR(pg) : 0; spin_unlock(&br->multicast_lock); + return err; } @@ -1349,6 +1436,13 @@ static int __grp_src_delete_marked(struct net_bridge_port_group *pg) return deleted; } +static void __grp_src_mod_timer(struct net_bridge_group_src *src, + unsigned long expires) +{ + mod_timer(&src->timer, expires); + br_multicast_fwd_src_handle(src); +} + static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; @@ -1377,7 +1471,7 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) other_query && !timer_pending(&other_query->timer)) ent->src_query_rexmit_cnt = lmqc; - mod_timer(&ent->timer, lmqt); + __grp_src_mod_timer(ent, lmqt); } } } @@ -1456,7 +1550,7 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, } if (ent) - mod_timer(&ent->timer, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); srcs += src_size; } @@ -1486,7 +1580,9 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, if (ent) ent->flags &= ~BR_SGRP_F_DELETE; else - br_multicast_new_group_src(pg, &src_ip); + ent = br_multicast_new_group_src(pg, &src_ip); + if (ent) + br_multicast_fwd_src_handle(ent); srcs += src_size; } @@ -1522,8 +1618,8 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, } else { ent = br_multicast_new_group_src(pg, &src_ip); if (ent) { - mod_timer(&ent->timer, - now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, + now + br_multicast_gmi(br)); changed = true; } } @@ -1589,7 +1685,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, changed = true; } if (ent) - mod_timer(&ent->timer, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); srcs += src_size; } @@ -1634,7 +1730,7 @@ static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, changed = true; } if (ent) - mod_timer(&ent->timer, now + br_multicast_gmi(br)); + __grp_src_mod_timer(ent, now + br_multicast_gmi(br)); srcs += src_size; } @@ -1689,8 +1785,10 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, BR_SGRP_F_SEND; to_send++; } else { - br_multicast_new_group_src(pg, &src_ip); + ent = br_multicast_new_group_src(pg, &src_ip); } + if (ent) + br_multicast_fwd_src_handle(ent); srcs += src_size; } @@ -1727,7 +1825,7 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, } else { ent = br_multicast_new_group_src(pg, &src_ip); if (ent) { - mod_timer(&ent->timer, pg->timer.expires); + __grp_src_mod_timer(ent, pg->timer.expires); changed = true; } } @@ -1823,7 +1921,7 @@ static bool __grp_src_block_excl(struct net_bridge_port_group *pg, if (!ent) { ent = br_multicast_new_group_src(pg, &src_ip); if (ent) { - mod_timer(&ent->timer, pg->timer.expires); + __grp_src_mod_timer(ent, pg->timer.expires); changed = true; } } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 55486b4956d3..93d76b3dfc35 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -218,6 +218,7 @@ struct net_bridge_fdb_entry { #define BR_SGRP_F_DELETE BIT(0) #define BR_SGRP_F_SEND BIT(1) +#define BR_SGRP_F_INSTALLED BIT(2) struct net_bridge_mcast_gc { struct hlist_node gc_node; -- 2.25.4
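The core of the automatic install is small: when a source entry needs forwarding state, an S,G port group is created (or found) for the same port in INCLUDE mode, the source is marked installed, and the kernel takes over the entry's expiry unless user-space already owns it as a permanent entry. A rough userspace model of that rule, with stand-in types rather than the kernel's:

/* Illustrative model only: the flow used when a report source gets a
 * forwarding S,G entry. */
#include <stdbool.h>
#include <stdio.h>

#define F_INSTALLED	0x1	/* models BR_SGRP_F_INSTALLED */

struct src_model {
	unsigned int flags;
};

struct sg_model {
	bool kernel_owned;	/* rt_protocol == RTPROT_KERNEL */
	bool permanent;		/* user-space permanent entry */
	bool timer_running;
};

/* Install forwarding state for the source and let the kernel own the S,G,
 * unless user-space added it as a permanent entry. */
static void fwd_src_add(struct src_model *src, struct sg_model *sg)
{
	if (src->flags & F_INSTALLED)
		return;

	src->flags |= F_INSTALLED;

	if (!sg->kernel_owned && sg->permanent)
		return;			/* user-managed: leave it alone */

	sg->timer_running = false;	/* kernel takes over removal */
}

int main(void)
{
	struct src_model src = { 0 };
	struct sg_model sg = { true, false, true };

	fwd_src_add(&src, &sg);
	printf("installed=%d sg timer=%d\n",
	       !!(src.flags & F_INSTALLED), sg.timer_running);
	return 0;
}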
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 13/16] net: bridge: mcast: handle port group filter modes
From: Nikolay Aleksandrov <nikolay at nvidia.com> We need to handle group filter mode transitions and initial state. To change a port group's INCLUDE -> EXCLUDE mode (or when we have added a new port group in EXCLUDE mode) we need to add that port to all of *,G ports' S,G entries for proper replication. When the EXCLUDE state is changed from IGMPv3 report, br_multicast_fwd_filter_exclude() must be called after the source list processing because the assumption is that all of the group's S,G entries will be created before transitioning to EXCLUDE mode, i.e. most importantly its blocked entries will already be added so it will not get automatically added to them. The transition EXCLUDE -> INCLUDE happens only when a port group timer expires, it requires us to remove that port from all of *,G ports' S,G entries where it was automatically added previously. Finally when we are adding a new S,G entry we must add all of *,G's EXCLUDE ports to it. In order to distinguish automatically added *,G EXCLUDE ports we have a new port group flag - MDB_PG_FLAGS_STAR_EXCL. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_mdb.c | 25 ++++- net/bridge/br_multicast.c | 172 +++++++++++++++++++++++++++++++++ net/bridge/br_private.h | 20 ++++ 4 files changed, 216 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index 1054f151078d..e4bd30a25f6b 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -518,6 +518,7 @@ struct br_mdb_entry { __u8 state; #define MDB_FLAGS_OFFLOAD (1 << 0) #define MDB_FLAGS_FAST_LEAVE (1 << 1) +#define MDB_FLAGS_STAR_EXCL (1 << 2) __u8 flags; __u16 vid; struct { diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 4e3a5cefc626..28cd35a9cf37 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -62,6 +62,8 @@ static void __mdb_entry_fill_flags(struct br_mdb_entry *e, unsigned char flags) e->flags |= MDB_FLAGS_OFFLOAD; if (flags & MDB_PG_FLAGS_FAST_LEAVE) e->flags |= MDB_FLAGS_FAST_LEAVE; + if (flags & MDB_PG_FLAGS_STAR_EXCL) + e->flags |= MDB_FLAGS_STAR_EXCL; } static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip, @@ -822,11 +824,11 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, struct nlattr **mdb_attrs, struct netlink_ext_ack *extack) { - struct net_bridge_mdb_entry *mp; + struct net_bridge_mdb_entry *mp, *star_mp; struct net_bridge_port_group *p; struct net_bridge_port_group __rcu **pp; + struct br_ip group, star_group; unsigned long now = jiffies; - struct br_ip group; u8 filter_mode; int err; @@ -890,6 +892,25 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, if (entry->state == MDB_TEMPORARY) mod_timer(&p->timer, now + br->multicast_membership_interval); br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); + /* if we are adding a new EXCLUDE port group (*,G) it needs to be also + * added to all S,G entries for proper replication, if we are adding + * a new INCLUDE port (S,G) then all of *,G EXCLUDE ports need to be + * added to it for proper replication + */ + if (br_multicast_should_handle_mode(br, group.proto)) { + switch (filter_mode) { + case MCAST_EXCLUDE: + br_multicast_star_g_handle_mode(p, MCAST_EXCLUDE); + break; + case MCAST_INCLUDE: + star_group = p->key.addr; + memset(&star_group.src, 0, sizeof(star_group.src)); + star_mp = br_mdb_ip_get(br, &star_group); + if (star_mp) + br_multicast_sg_add_exclude_ports(star_mp, p); + 
break; + } + } return 0; } diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index ece8ac805e98..f39bbd733722 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -73,6 +73,8 @@ __br_multicast_add_group(struct net_bridge *br, const unsigned char *src, u8 filter_mode, bool igmpv2_mldv1); +static void br_multicast_find_del_pg(struct net_bridge *br, + struct net_bridge_port_group *pg); static struct net_bridge_port_group * br_sg_port_find(struct net_bridge *br, @@ -195,8 +197,163 @@ static bool br_port_group_equal(struct net_bridge_port_group *p, return ether_addr_equal(src, p->eth_addr); } +static void __fwd_add_star_excl(struct net_bridge_port_group *pg, + struct br_ip *sg_ip) +{ + struct net_bridge_port_group_sg_key sg_key; + struct net_bridge *br = pg->key.port->br; + struct net_bridge_port_group *src_pg; + + memset(&sg_key, 0, sizeof(sg_key)); + sg_key.port = pg->key.port; + sg_key.addr = *sg_ip; + if (br_sg_port_find(br, &sg_key)) + return; + + src_pg = __br_multicast_add_group(br, pg->key.port, sg_ip, pg->eth_addr, + MCAST_INCLUDE, false); + if (IS_ERR_OR_NULL(src_pg) || + src_pg->rt_protocol != RTPROT_KERNEL) + return; + + src_pg->flags |= MDB_PG_FLAGS_STAR_EXCL; +} + +static void __fwd_del_star_excl(struct net_bridge_port_group *pg, + struct br_ip *sg_ip) +{ + struct net_bridge_port_group_sg_key sg_key; + struct net_bridge *br = pg->key.port->br; + struct net_bridge_port_group *src_pg; + + memset(&sg_key, 0, sizeof(sg_key)); + sg_key.port = pg->key.port; + sg_key.addr = *sg_ip; + src_pg = br_sg_port_find(br, &sg_key); + if (!src_pg || !(src_pg->flags & MDB_PG_FLAGS_STAR_EXCL) || + src_pg->rt_protocol != RTPROT_KERNEL) + return; + + br_multicast_find_del_pg(br, src_pg); +} + +/* When a port group transitions to (or is added as) EXCLUDE we need to add it + * to all other ports' S,G entries which are not blocked by the current group + * for proper replication, the assumption is that any S,G blocked entries + * are already added so the S,G,port lookup should skip them. + * When a port group transitions from EXCLUDE -> INCLUDE mode or is being + * deleted we need to remove it from all ports' S,G entries where it was + * automatically installed before (i.e. where it's MDB_PG_FLAGS_STAR_EXCL). 
+ */ +void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, + u8 filter_mode) +{ + struct net_bridge *br = pg->key.port->br; + struct net_bridge_port_group *pg_lst; + struct net_bridge_mdb_entry *mp; + struct br_ip sg_ip; + + if (WARN_ON(!br_multicast_is_star_g(&pg->key.addr))) + return; + + mp = br_mdb_ip_get(br, &pg->key.addr); + if (!mp) + return; + + memset(&sg_ip, 0, sizeof(sg_ip)); + sg_ip = pg->key.addr; + for (pg_lst = mlock_dereference(mp->ports, br); + pg_lst; + pg_lst = mlock_dereference(pg_lst->next, br)) { + struct net_bridge_group_src *src_ent; + + if (pg_lst == pg) + continue; + hlist_for_each_entry(src_ent, &pg_lst->src_list, node) { + if (!(src_ent->flags & BR_SGRP_F_INSTALLED)) + continue; + sg_ip.src = src_ent->addr.src; + switch (filter_mode) { + case MCAST_INCLUDE: + __fwd_del_star_excl(pg, &sg_ip); + break; + case MCAST_EXCLUDE: + __fwd_add_star_excl(pg, &sg_ip); + break; + } + } + } +} + +static void br_multicast_sg_del_exclude_ports(struct net_bridge_mdb_entry *sgmp) +{ + struct net_bridge_port_group __rcu **pp; + struct net_bridge_port_group *p; + + /* *,G exclude ports are only added to S,G entries */ + if (WARN_ON(br_multicast_is_star_g(&sgmp->addr))) + return; + + /* we need the STAR_EXCLUDE ports if there are non-STAR_EXCLUDE ports + * we should ignore perm entries since they're managed by user-space + */ + for (pp = &sgmp->ports; + (p = mlock_dereference(*pp, sgmp->br)) != NULL; + pp = &p->next) + if (!(p->flags & (MDB_PG_FLAGS_STAR_EXCL | + MDB_PG_FLAGS_PERMANENT))) + return; + + for (pp = &sgmp->ports; + (p = mlock_dereference(*pp, sgmp->br)) != NULL;) { + if (!(p->flags & MDB_PG_FLAGS_PERMANENT)) + br_multicast_del_pg(sgmp, p, pp); + else + pp = &p->next; + } +} + +void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, + struct net_bridge_port_group *sg) +{ + struct net_bridge_port_group_sg_key sg_key; + struct net_bridge *br = star_mp->br; + struct net_bridge_port_group *pg; + + if (WARN_ON(br_multicast_is_star_g(&sg->key.addr))) + return; + if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr))) + return; + + memset(&sg_key, 0, sizeof(sg_key)); + sg_key.addr = sg->key.addr; + /* we need to add all exclude ports to the S,G */ + for (pg = mlock_dereference(star_mp->ports, br); + pg; + pg = mlock_dereference(pg->next, br)) { + struct net_bridge_port_group *src_pg; + + if (pg == sg || pg->filter_mode == MCAST_INCLUDE) + continue; + + sg_key.port = pg->key.port; + if (br_sg_port_find(br, &sg_key)) + continue; + + src_pg = __br_multicast_add_group(br, pg->key.port, + &sg->key.addr, + sg->eth_addr, + MCAST_INCLUDE, false); + if (IS_ERR_OR_NULL(src_pg) || + src_pg->rt_protocol != RTPROT_KERNEL) + continue; + src_pg->flags |= MDB_PG_FLAGS_STAR_EXCL; + } +} + static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) { + struct net_bridge_mdb_entry *star_mp; struct net_bridge_port_group *sg; struct br_ip sg_ip; @@ -211,6 +368,7 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) if (IS_ERR_OR_NULL(sg)) return; src->flags |= BR_SGRP_F_INSTALLED; + sg->flags &= ~MDB_PG_FLAGS_STAR_EXCL; /* if it was added by user-space as perm we can skip next steps */ if (sg->rt_protocol != RTPROT_KERNEL && @@ -219,6 +377,11 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) /* the kernel is now responsible for removing this S,G */ del_timer(&sg->timer); + star_mp = br_mdb_ip_get(src->br, &src->pg->key.addr); + if (!star_mp) + return; + + br_multicast_sg_add_exclude_ports(star_mp, sg); } 
static void br_multicast_fwd_src_remove(struct net_bridge_group_src *src) @@ -349,6 +512,10 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) br_multicast_del_group_src(ent); br_mdb_notify(br->dev, mp, pg, RTM_DELMDB); + if (!br_multicast_is_star_g(&mp->addr)) + br_multicast_sg_del_exclude_ports(mp); + else + br_multicast_star_g_handle_mode(pg, MCAST_INCLUDE); hlist_add_head(&pg->mcast_gc.gc_node, &br->mcast_gc_list); queue_work(system_long_wq, &br->mcast_gc_work); @@ -407,6 +574,9 @@ static void br_multicast_port_group_expired(struct timer_list *t) } else if (changed) { struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->key.addr); + if (changed && br_multicast_is_star_g(&pg->key.addr)) + br_multicast_star_g_handle_mode(pg, MCAST_INCLUDE); + if (WARN_ON(!mp)) goto out; br_mdb_notify(br->dev, mp, pg, RTM_NEWMDB); @@ -1641,6 +1811,7 @@ static bool br_multicast_isexc(struct net_bridge_port_group *pg, switch (pg->filter_mode) { case MCAST_INCLUDE: __grp_src_isexc_incl(pg, srcs, nsrcs, src_size); + br_multicast_star_g_handle_mode(pg, MCAST_EXCLUDE); changed = true; break; case MCAST_EXCLUDE: @@ -1853,6 +2024,7 @@ static bool br_multicast_toex(struct net_bridge_port_group *pg, switch (pg->filter_mode) { case MCAST_INCLUDE: __grp_src_toex_incl(pg, srcs, nsrcs, src_size); + br_multicast_star_g_handle_mode(pg, MCAST_EXCLUDE); changed = true; break; case MCAST_EXCLUDE: diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 93d76b3dfc35..128d2d0417a0 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -213,6 +213,7 @@ struct net_bridge_fdb_entry { #define MDB_PG_FLAGS_PERMANENT BIT(0) #define MDB_PG_FLAGS_OFFLOAD BIT(1) #define MDB_PG_FLAGS_FAST_LEAVE BIT(2) +#define MDB_PG_FLAGS_STAR_EXCL BIT(3) #define PG_SRC_ENT_LIMIT 32 @@ -833,6 +834,10 @@ void br_mdb_init(void); void br_mdb_uninit(void); void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify); void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify); +void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, + u8 filter_mode); +void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, + struct net_bridge_port_group *sg); #define mlock_dereference(X, br) \ rcu_dereference_protected(X, lockdep_is_held(&br->multicast_lock)) @@ -895,6 +900,21 @@ static inline bool br_multicast_is_star_g(const struct br_ip *ip) } } +static inline bool br_multicast_should_handle_mode(const struct net_bridge *br, + __be16 proto) +{ + switch (proto) { + case htons(ETH_P_IP): + return !!(br->multicast_igmp_version == 3); +#if IS_ENABLED(CONFIG_IPV6) + case htons(ETH_P_IPV6): + return !!(br->multicast_mld_version == 2); +#endif + default: + return false; + } +} + static inline int br_multicast_igmp_type(const struct sk_buff *skb) { return BR_INPUT_SKB_CB(skb)->igmp; -- 2.25.4
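The two transitions can be summarised as: on INCLUDE -> EXCLUDE add the port to every S,G installed under the *,G (unless an entry for that port already exists, e.g. a blocked one), remembering which additions were automatic; on EXCLUDE -> INCLUDE remove only those automatically added entries. A small userspace model of that bookkeeping, with arrays standing in for the kernel's lists and flags:

/* Illustrative model only: what the *,G filter-mode transition does to the
 * group's S,G entries. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PORTS 8

struct sg_model {
	bool member[MAX_PORTS];		/* port is in this S,G's port list */
	bool star_excl[MAX_PORTS];	/* models MDB_PG_FLAGS_STAR_EXCL */
};

/* *,G port switched to EXCLUDE: add it to every S,G of the group, unless
 * an entry for that port already exists. */
static void star_g_to_exclude(struct sg_model *sgs, int nsg, int port)
{
	for (int i = 0; i < nsg; i++) {
		if (sgs[i].member[port])
			continue;		/* existing entry wins */
		sgs[i].member[port] = true;
		sgs[i].star_excl[port] = true;	/* remember it was auto-added */
	}
}

/* *,G port switched back to INCLUDE (or is being removed): drop it only
 * from S,G entries where it was auto-added above. */
static void star_g_to_include(struct sg_model *sgs, int nsg, int port)
{
	for (int i = 0; i < nsg; i++) {
		if (!sgs[i].star_excl[port])
			continue;
		sgs[i].member[port] = false;
		sgs[i].star_excl[port] = false;
	}
}

int main(void)
{
	struct sg_model sgs[2] = { 0 };

	sgs[0].member[1] = true;		/* port 1 explicitly in S,G 0 */
	star_g_to_exclude(sgs, 2, 1);
	star_g_to_exclude(sgs, 2, 2);
	star_g_to_include(sgs, 2, 2);
	printf("S,G0: p1=%d p2=%d  S,G1: p1=%d p2=%d\n",
	       sgs[0].member[1], sgs[0].member[2],
	       sgs[1].member[1], sgs[1].member[2]);
	return 0;
}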
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 14/16] net: bridge: mcast: add support for blocked port groups
From: Nikolay Aleksandrov <nikolay at nvidia.com> When excluding S,G entries we need a way to block a particular S,G,port. The new port group flag is managed based on the source's timer as per RFCs 3376 and 3810. When a source expires and its port group is in EXCLUDE mode, it will be blocked. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- include/uapi/linux/if_bridge.h | 1 + net/bridge/br_mdb.c | 2 ++ net/bridge/br_multicast.c | 49 +++++++++++++++++++++++++++++----- net/bridge/br_private.h | 1 + 4 files changed, 47 insertions(+), 6 deletions(-) diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index e4bd30a25f6b..4c687686aa8f 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -519,6 +519,7 @@ struct br_mdb_entry { #define MDB_FLAGS_OFFLOAD (1 << 0) #define MDB_FLAGS_FAST_LEAVE (1 << 1) #define MDB_FLAGS_STAR_EXCL (1 << 2) +#define MDB_FLAGS_BLOCKED (1 << 3) __u8 flags; __u16 vid; struct { diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 28cd35a9cf37..e15bab19a012 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -64,6 +64,8 @@ static void __mdb_entry_fill_flags(struct br_mdb_entry *e, unsigned char flags) e->flags |= MDB_FLAGS_FAST_LEAVE; if (flags & MDB_PG_FLAGS_STAR_EXCL) e->flags |= MDB_FLAGS_STAR_EXCL; + if (flags & MDB_PG_FLAGS_BLOCKED) + e->flags |= MDB_FLAGS_BLOCKED; } static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip, diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index f39bbd733722..11d224c01914 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -72,7 +72,8 @@ __br_multicast_add_group(struct net_bridge *br, struct br_ip *group, const unsigned char *src, u8 filter_mode, - bool igmpv2_mldv1); + bool igmpv2_mldv1, + bool blocked); static void br_multicast_find_del_pg(struct net_bridge *br, struct net_bridge_port_group *pg); @@ -211,7 +212,7 @@ static void __fwd_add_star_excl(struct net_bridge_port_group *pg, return; src_pg = __br_multicast_add_group(br, pg->key.port, sg_ip, pg->eth_addr, - MCAST_INCLUDE, false); + MCAST_INCLUDE, false, false); if (IS_ERR_OR_NULL(src_pg) || src_pg->rt_protocol != RTPROT_KERNEL) return; @@ -343,7 +344,7 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, src_pg = __br_multicast_add_group(br, pg->key.port, &sg->key.addr, sg->eth_addr, - MCAST_INCLUDE, false); + MCAST_INCLUDE, false, false); if (IS_ERR_OR_NULL(src_pg) || src_pg->rt_protocol != RTPROT_KERNEL) continue; @@ -364,7 +365,8 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src) sg_ip = src->pg->key.addr; sg_ip.src = src->addr.src; sg = __br_multicast_add_group(src->br, src->pg->key.port, &sg_ip, - src->pg->eth_addr, MCAST_INCLUDE, false); + src->pg->eth_addr, MCAST_INCLUDE, false, + !timer_pending(&src->timer)); if (IS_ERR_OR_NULL(sg)) return; src->flags |= BR_SGRP_F_INSTALLED; @@ -415,9 +417,38 @@ static void br_multicast_fwd_src_remove(struct net_bridge_group_src *src) src->flags &= ~BR_SGRP_F_INSTALLED; } +/* install S,G and based on src's timer enable or disable forwarding */ static void br_multicast_fwd_src_handle(struct net_bridge_group_src *src) { + struct net_bridge_port_group_sg_key sg_key; + struct net_bridge_port_group *sg; + u8 old_flags; + br_multicast_fwd_src_add(src); + + memset(&sg_key, 0, sizeof(sg_key)); + sg_key.addr = src->pg->key.addr; + sg_key.addr.src = src->addr.src; + sg_key.port = src->pg->key.port; + + sg = br_sg_port_find(src->br, &sg_key); + if (!sg 
|| (sg->flags & MDB_PG_FLAGS_PERMANENT)) + return; + + old_flags = sg->flags; + if (timer_pending(&src->timer)) + sg->flags &= ~MDB_PG_FLAGS_BLOCKED; + else + sg->flags |= MDB_PG_FLAGS_BLOCKED; + + if (old_flags != sg->flags) { + struct net_bridge_mdb_entry *sg_mp; + + sg_mp = br_mdb_ip_get(src->br, &sg_key.addr); + if (!sg_mp) + return; + br_mdb_notify(src->br->dev, sg_mp, sg, RTM_NEWMDB); + } } static void br_multicast_destroy_mdb_entry(struct net_bridge_mcast_gc *gc) @@ -995,7 +1026,10 @@ static void br_multicast_group_src_expired(struct timer_list *t) if (!hlist_empty(&pg->src_list)) goto out; br_multicast_find_del_pg(br, pg); + } else { + br_multicast_fwd_src_handle(src); } + out: spin_unlock(&br->multicast_lock); } @@ -1131,7 +1165,8 @@ __br_multicast_add_group(struct net_bridge *br, struct br_ip *group, const unsigned char *src, u8 filter_mode, - bool igmpv2_mldv1) + bool igmpv2_mldv1, + bool blocked) { struct net_bridge_port_group __rcu **pp; struct net_bridge_port_group *p = NULL; @@ -1167,6 +1202,8 @@ __br_multicast_add_group(struct net_bridge *br, goto out; } rcu_assign_pointer(*pp, p); + if (blocked) + p->flags |= MDB_PG_FLAGS_BLOCKED; br_mdb_notify(br->dev, mp, p, RTM_NEWMDB); found: @@ -1189,7 +1226,7 @@ static int br_multicast_add_group(struct net_bridge *br, spin_lock(&br->multicast_lock); pg = __br_multicast_add_group(br, port, group, src, filter_mode, - igmpv2_mldv1); + igmpv2_mldv1, false); /* NULL is considered valid for host joined groups */ err = IS_ERR(pg) ? PTR_ERR(pg) : 0; spin_unlock(&br->multicast_lock); diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index 128d2d0417a0..345118e35c42 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -214,6 +214,7 @@ struct net_bridge_fdb_entry { #define MDB_PG_FLAGS_OFFLOAD BIT(1) #define MDB_PG_FLAGS_FAST_LEAVE BIT(2) #define MDB_PG_FLAGS_STAR_EXCL BIT(3) +#define MDB_PG_FLAGS_BLOCKED BIT(4) #define PG_SRC_ENT_LIMIT 32 -- 2.25.4
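The blocked flag simply mirrors the source timer for kernel-managed S,G,port entries: a running timer means forward, an expired one means block, and user-space permanent entries are never touched. A minimal userspace sketch of that rule (stand-in flag names, not the kernel's):

/* Illustrative model only: how the blocked flag tracks the source timer. */
#include <stdbool.h>
#include <stdio.h>

#define PG_BLOCKED	0x1	/* models MDB_PG_FLAGS_BLOCKED */
#define PG_PERMANENT	0x2	/* user-space permanent entries are left alone */

struct sg_port_model {
	unsigned int flags;
};

static bool sync_blocked(struct sg_port_model *sg, bool src_timer_running)
{
	unsigned int old = sg->flags;

	if (sg->flags & PG_PERMANENT)
		return false;		/* never touch user-managed entries */

	if (src_timer_running)
		sg->flags &= ~PG_BLOCKED;
	else
		sg->flags |= PG_BLOCKED;	/* source expired: stop forwarding */

	return old != sg->flags;	/* caller notifies only on change */
}

int main(void)
{
	struct sg_port_model sg = { 0 };

	printf("expired -> changed=%d blocked=%d\n",
	       sync_blocked(&sg, false), !!(sg.flags & PG_BLOCKED));
	printf("refreshed -> changed=%d blocked=%d\n",
	       sync_blocked(&sg, true), !!(sg.flags & PG_BLOCKED));
	return 0;
}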
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 15/16] net: bridge: mcast: handle host state
From: Nikolay Aleksandrov <nikolay at nvidia.com> Since host joins are considered as EXCLUDE {} joins we need to reflect that in all of *,G ports' S,G entries. Since the S,Gs can have host_joined == true only set automatically we can safely set it to false when removing all automatically added entries upon S,G delete. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_multicast.c | 58 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index 11d224c01914..66eb62ded192 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -286,6 +286,53 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg, } } +/* called when adding a new S,G with host_joined == false by default */ +static void br_multicast_sg_host_state(struct net_bridge_mdb_entry *star_mp, + struct net_bridge_port_group *sg) +{ + struct net_bridge_mdb_entry *sg_mp; + + if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr))) + return; + if (!star_mp->host_joined) + return; + + sg_mp = br_mdb_ip_get(star_mp->br, &sg->key.addr); + if (!sg_mp) + return; + sg_mp->host_joined = true; +} + +/* set the host_joined state of all of *,G's S,G entries */ +static void br_multicast_star_g_host_state(struct net_bridge_mdb_entry *star_mp) +{ + struct net_bridge *br = star_mp->br; + struct net_bridge_mdb_entry *sg_mp; + struct net_bridge_port_group *pg; + struct br_ip sg_ip; + + if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr))) + return; + + memset(&sg_ip, 0, sizeof(sg_ip)); + sg_ip = star_mp->addr; + for (pg = mlock_dereference(star_mp->ports, br); + pg; + pg = mlock_dereference(pg->next, br)) { + struct net_bridge_group_src *src_ent; + + hlist_for_each_entry(src_ent, &pg->src_list, node) { + if (!(src_ent->flags & BR_SGRP_F_INSTALLED)) + continue; + sg_ip.src = src_ent->addr.src; + sg_mp = br_mdb_ip_get(br, &sg_ip); + if (!sg_mp) + continue; + sg_mp->host_joined = star_mp->host_joined; + } + } +} + static void br_multicast_sg_del_exclude_ports(struct net_bridge_mdb_entry *sgmp) { struct net_bridge_port_group __rcu **pp; @@ -305,6 +352,12 @@ static void br_multicast_sg_del_exclude_ports(struct net_bridge_mdb_entry *sgmp) MDB_PG_FLAGS_PERMANENT))) return; + /* currently the host can only have joined the *,G which means + * we treat it as EXCLUDE {}, so for an S,G it's considered a + * STAR_EXCLUDE entry and we can safely leave it + */ + sgmp->host_joined = false; + for (pp = &sgmp->ports; (p = mlock_dereference(*pp, sgmp->br)) != NULL;) { if (!(p->flags & MDB_PG_FLAGS_PERMANENT)) @@ -326,6 +379,7 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp, if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr))) return; + br_multicast_sg_host_state(star_mp, sg); memset(&sg_key, 0, sizeof(sg_key)); sg_key.addr = sg->key.addr; /* we need to add all exclude ports to the S,G */ @@ -1143,6 +1197,8 @@ void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify) { if (!mp->host_joined) { mp->host_joined = true; + if (br_multicast_is_star_g(&mp->addr)) + br_multicast_star_g_host_state(mp); if (notify) br_mdb_notify(mp->br->dev, mp, NULL, RTM_NEWMDB); } @@ -1155,6 +1211,8 @@ void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify) return; mp->host_joined = false; + if (br_multicast_is_star_g(&mp->addr)) + br_multicast_star_g_host_state(mp); if (notify) br_mdb_notify(mp->br->dev, mp, NULL, RTM_DELMDB); } -- 2.25.4
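In other words, whenever the *,G host_joined state changes, the same value is copied to every S,G entry that was installed from that group's sources, because the host join is modelled as an EXCLUDE {} membership. A small userspace sketch of that propagation, using stand-in types:

/* Illustrative model only: propagating *,G host-join state to the S,G
 * entries installed from its sources. */
#include <stdbool.h>
#include <stdio.h>

struct sg_entry_model {
	bool installed;		/* source has a forwarding S,G entry */
	bool host_joined;
};

struct star_g_model {
	bool host_joined;
	struct sg_entry_model sgs[4];
	int nsg;
};

static void star_g_host_state(struct star_g_model *star)
{
	for (int i = 0; i < star->nsg; i++) {
		if (!star->sgs[i].installed)
			continue;
		star->sgs[i].host_joined = star->host_joined;
	}
}

int main(void)
{
	struct star_g_model star = {
		.host_joined = true,
		.sgs = { { true, false }, { false, false }, { true, false } },
		.nsg = 3,
	};

	star_g_host_state(&star);
	for (int i = 0; i < star.nsg; i++)
		printf("sg%d host_joined=%d\n", i, star.sgs[i].host_joined);
	return 0;
}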
Nikolay Aleksandrov
2020-Sep-22 07:30 UTC
[Bridge] [PATCH net-next v2 16/16] net: bridge: mcast: when forwarding handle filter mode and blocked flag
From: Nikolay Aleksandrov <nikolay at nvidia.com> We need to avoid forwarding to ports in MCAST_INCLUDE filter mode when the mdst entry is a *,G or when the port has the blocked flag. Signed-off-by: Nikolay Aleksandrov <nikolay at nvidia.com> --- net/bridge/br_forward.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c index 4d12999e4576..e28ffadd1371 100644 --- a/net/bridge/br_forward.c +++ b/net/bridge/br_forward.c @@ -274,10 +274,19 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, struct net_bridge *br = netdev_priv(dev); struct net_bridge_port *prev = NULL; struct net_bridge_port_group *p; + bool allow_mode_include = true; struct hlist_node *rp; rp = rcu_dereference(hlist_first_rcu(&br->router_list)); - p = mdst ? rcu_dereference(mdst->ports) : NULL; + if (mdst) { + p = rcu_dereference(mdst->ports); + if (br_multicast_should_handle_mode(br, mdst->addr.proto) && + br_multicast_is_star_g(&mdst->addr)) + allow_mode_include = false; + } else { + p = NULL; + } + while (p || rp) { struct net_bridge_port *port, *lport, *rport; @@ -292,6 +301,10 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, local_orig); goto delivered; } + if ((!allow_mode_include && + p->filter_mode == MCAST_INCLUDE) || + (p->flags & MDB_PG_FLAGS_BLOCKED)) + goto delivered; } else { port = rport; } -- 2.25.4
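The per-port decision added here boils down to two checks: when the destination is a *,G entry and IGMPv3/MLDv2 handling is active, skip port groups in INCLUDE mode; and always skip entries carrying the blocked flag. Router ports are handled separately in the flood loop and still receive the traffic. A minimal userspace sketch of that predicate (stand-in types, not the kernel code):

/* Illustrative model only: the per-port-group skip decision on the bridge
 * multicast forwarding path. */
#include <stdbool.h>
#include <stdio.h>

#define PG_BLOCKED 0x1			/* models MDB_PG_FLAGS_BLOCKED */

enum filter_mode { MODE_INCLUDE, MODE_EXCLUDE };

struct pg_model {
	enum filter_mode filter_mode;
	unsigned int flags;
};

/* dst_is_star_g: the packet matched a *,G entry and IGMPv3/MLDv2 handling
 * is enabled, so INCLUDE-mode port groups must not get *,G traffic. */
static bool should_skip(const struct pg_model *pg, bool dst_is_star_g)
{
	if (dst_is_star_g && pg->filter_mode == MODE_INCLUDE)
		return true;
	if (pg->flags & PG_BLOCKED)
		return true;
	return false;
}

int main(void)
{
	struct pg_model incl = { MODE_INCLUDE, 0 };
	struct pg_model excl = { MODE_EXCLUDE, 0 };
	struct pg_model blocked = { MODE_EXCLUDE, PG_BLOCKED };

	printf("*,G include:  skip=%d\n", should_skip(&incl, true));	/* 1 */
	printf("S,G include:  skip=%d\n", should_skip(&incl, false));	/* 0 */
	printf("*,G exclude:  skip=%d\n", should_skip(&excl, true));	/* 0 */
	printf("S,G blocked:  skip=%d\n", should_skip(&blocked, false));/* 1 */
	return 0;
}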
David Miller
2020-Sep-23 20:25 UTC
[Bridge] [PATCH net-next v2 00/16] net: bridge: mcast: IGMPv3/MLDv2 fast-path (part 2)
From: Nikolay Aleksandrov <razor at blackwall.org>
Date: Tue, 22 Sep 2020 10:30:11 +0300

> This is the second part of the IGMPv3/MLDv2 support which adds support
> for the fast-path.
 ...

Series applied to net-next and build testing, thanks Nikolay.