Sridhar Samudrala
2018-Apr-05 21:08 UTC
[RFC PATCH net-next v5 0/4] Enable virtio_net to act as a backup for a passthru device
The main motivation for this patch series is to enable cloud service
providers to provide an accelerated datapath to virtio-net enabled VMs
in a transparent manner with no/minimal guest userspace changes. This
also enables hypervisor controlled live migration to be supported with
VMs that have direct attached SR-IOV VF devices.

Patch 1 introduces a new feature bit VIRTIO_NET_F_BACKUP that can be
used by the hypervisor to indicate that the virtio_net interface should
act as a backup for another device with the same MAC address.

Patch 2 introduces a bypass module that provides a generic interface
for paravirtual drivers to listen for netdev register/unregister/link
change events from pci ethernet devices with the same MAC and take over
their datapath. The notifier and event handling code is based on the
existing netvsc implementation. A paravirtual driver can use this
module by registering a set of ops, and each instance of the device
when it is probed.

Patch 3 extends virtio_net to use the alternate datapath when available
and registered. When the BACKUP feature is enabled, the virtio_net
driver creates an additional 'bypass' netdev that acts as a master
device and controls 2 slave devices. The original virtio_net netdev is
registered as the 'backup' netdev and a passthru/VF device with the
same MAC gets registered as the 'active' netdev. Both 'bypass' and
'backup' netdevs are associated with the same 'pci' device. The user
accesses the network interface via the 'bypass' netdev. The 'bypass'
netdev chooses the 'active' netdev as the default for transmits when it
is available and its link is up and running.

Patch 4 refactors netvsc to use the registration/notification framework
supported by the bypass module.

As this patch series initially focuses on usecases where the hypervisor
fully controls the VM networking and the guest is not expected to
directly configure any hardware settings, it doesn't expose all the
ndo/ethtool ops that are supported by virtio_net at this time. To
support additional usecases, it should be possible to enable additional
ops later by caching the state in the virtio netdev and replaying it
when the 'active' netdev gets registered.

The hypervisor needs to enable only one datapath at any time so that
packets don't get looped back to the VM over the other datapath. When a
VF is plugged, the virtio datapath link state can be marked as down. At
the time of live migration, the hypervisor needs to unplug the VF
device from the guest on the source host and reset the MAC filter of
the VF to initiate failover of the datapath to virtio before starting
the migration. After the migration is completed, the destination
hypervisor sets the MAC filter on the VF and plugs it back into the
guest to switch over to the VF datapath.

This patch series is based on the discussion initiated by Jesse on this
thread:
https://marc.info/?l=linux-virtualization&m=151189725224231&w=2

v5 RFC:
- Based on Jiri's comments, moved the common functionality to a
  'bypass' module so that the same notifier and event handlers to
  handle child register/unregister/link change events can be shared
  between virtio_net and netvsc.
- Improved error handling based on Siwei's comments.

v4:
- Based on the review comments on the v3 version of the RFC patch and
  Jakub's suggestion for the naming issue with the 3 netdev solution,
  proposed a 3 netdev in-driver bonding solution for virtio-net.

v3 RFC:
- Introduced the 3 netdev model, pointed out a couple of issues with
  that model, and proposed a 2 netdev model to avoid those issues.
- Removed the broadcast/multicast optimization and only use virtio as
  the backup path when the VF is unplugged.

v2 RFC:
- Changed VIRTIO_NET_F_MASTER to VIRTIO_NET_F_BACKUP (mst)
- Made a small change to the virtio-net xmit path to only use the VF
  datapath for unicasts. Broadcasts/multicasts use the virtio datapath.
  This avoids east-west broadcasts going over the PCI link.
- Added support for the feature bit in qemu

Sridhar Samudrala (4):
  virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
  net: Introduce generic bypass module
  virtio_net: Extend virtio to use VF datapath when available
  netvsc: refactor notifier/event handling code to use the bypass
    framework

 drivers/net/Kconfig             |   1 +
 drivers/net/hyperv/Kconfig      |   1 +
 drivers/net/hyperv/netvsc_drv.c | 219 ++++----------
 drivers/net/virtio_net.c        | 614 +++++++++++++++++++++++++++++++++++++++-
 include/net/bypass.h            |  80 ++++++
 include/uapi/linux/virtio_net.h |   3 +
 net/Kconfig                     |  18 ++
 net/core/Makefile               |   1 +
 net/core/bypass.c               | 406 ++++++++++++++++++++++++++
 9 files changed, 1184 insertions(+), 159 deletions(-)
 create mode 100644 include/net/bypass.h
 create mode 100644 net/core/bypass.c

-- 
2.14.3
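For illustration, the transmit-side failover policy described above
reduces to the following selection logic. This is only a sketch with
illustrative names (the series implements it in patch 3 as
virtnet_bypass_start_xmit):

#include <linux/netdevice.h>

/* Sketch only: the slave-preference order of the 'bypass' master.
 * Prefer the VF ('active') datapath while it is up and running,
 * else fall back to the virtio ('backup') datapath.
 */
static struct net_device *pick_tx_slave(struct net_device *active,
					struct net_device *backup)
{
	if (active && netif_running(active) && netif_carrier_ok(active))
		return active;
	if (backup && netif_running(backup) && netif_carrier_ok(backup))
		return backup;
	return NULL;	/* neither slave usable; caller drops the skb */
}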
Sridhar Samudrala
2018-Apr-05 21:08 UTC
[RFC PATCH net-next v5 1/4] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
This feature bit can be used by the hypervisor to indicate that the
virtio_net device should act as a backup for another device with the
same MAC address.

VIRTIO_NET_F_BACKUP is defined as bit 62 as it is a device feature bit.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com>
---
 drivers/net/virtio_net.c        | 2 +-
 include/uapi/linux/virtio_net.h | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7b187ec7411e..befb5944f3fd 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2962,7 +2962,7 @@ static struct virtio_device_id id_table[] = {
 	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \
 	VIRTIO_NET_F_CTRL_MAC_ADDR, \
 	VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
-	VIRTIO_NET_F_SPEED_DUPLEX
+	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_BACKUP

 static unsigned int features[] = {
 	VIRTNET_FEATURES,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 5de6ed37695b..c7c35fd1a5ed 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -57,6 +57,9 @@
 					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */

+#define VIRTIO_NET_F_BACKUP	  62	/* Act as backup for another device
+					 * with the same MAC.
+					 */
 #define VIRTIO_NET_F_SPEED_DUPLEX 63	/* Device set linkspeed and duplex */

 #ifndef VIRTIO_NET_NO_LEGACY
-- 
2.14.3
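For context, the driver only changes behaviour when the bit is actually
negotiated. A minimal sketch of such a gate, using the existing
virtio_has_feature() helper (the real probe-time check appears in
patch 3):

#include <linux/virtio_config.h>

/* Sketch: create the bypass master netdev only when the device offered
 * VIRTIO_NET_F_BACKUP and the guest accepted it during negotiation.
 */
static bool virtnet_backup_negotiated(struct virtio_device *vdev)
{
	return virtio_has_feature(vdev, VIRTIO_NET_F_BACKUP);
}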
Sridhar Samudrala
2018-Apr-05 21:08 UTC
[RFC PATCH net-next v5 2/4] net: Introduce generic bypass module
This provides a generic interface for paravirtual drivers to listen for
netdev register/unregister/link change events from pci ethernet devices
with the same MAC and take over their datapath. The notifier and event
handling code is based on the existing netvsc implementation.

A paravirtual driver can use this module by registering a set of ops,
and each instance of the device when it is probed.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com>
---
 include/net/bypass.h |  80 +++++++++
 net/Kconfig          |  18 +++
 net/core/Makefile    |   1 +
 net/core/bypass.c    | 406 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 505 insertions(+)
 create mode 100644 include/net/bypass.h
 create mode 100644 net/core/bypass.c

diff --git a/include/net/bypass.h b/include/net/bypass.h
new file mode 100644
index 000000000000..e2dd122f951a
--- /dev/null
+++ b/include/net/bypass.h
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, Intel Corporation. */
+
+#ifndef _NET_BYPASS_H
+#define _NET_BYPASS_H
+
+#include <linux/netdevice.h>
+
+struct bypass_ops {
+	int (*register_child)(struct net_device *bypass_netdev,
+			      struct net_device *child_netdev);
+	int (*join_child)(struct net_device *bypass_netdev,
+			  struct net_device *child_netdev);
+	int (*unregister_child)(struct net_device *bypass_netdev,
+				struct net_device *child_netdev);
+	int (*release_child)(struct net_device *bypass_netdev,
+			     struct net_device *child_netdev);
+	int (*update_link)(struct net_device *bypass_netdev,
+			   struct net_device *child_netdev);
+	rx_handler_result_t (*handle_frame)(struct sk_buff **pskb);
+};
+
+struct bypass_instance {
+	struct list_head list;
+	struct net_device __rcu *bypass_netdev;
+	struct bypass *bypass;
+};
+
+struct bypass {
+	struct list_head list;
+	const struct bypass_ops *ops;
+	const struct net_device_ops *netdev_ops;
+	struct list_head instance_list;
+	struct mutex lock;
+};
+
+#if IS_ENABLED(CONFIG_NET_BYPASS)
+
+struct bypass *bypass_register_driver(const struct bypass_ops *ops,
+				      const struct net_device_ops *netdev_ops);
+void bypass_unregister_driver(struct bypass *bypass);
+
+int bypass_register_instance(struct bypass *bypass, struct net_device *dev);
+int bypass_unregister_instance(struct bypass *bypass, struct net_device *dev);
+
+int bypass_unregister_child(struct net_device *child_netdev);
+
+#else
+
+static inline
+struct bypass *bypass_register_driver(const struct bypass_ops *ops,
+				      const struct net_device_ops *netdev_ops)
+{
+	return NULL;
+}
+
+static inline void bypass_unregister_driver(struct bypass *bypass)
+{
+}
+
+static inline int bypass_register_instance(struct bypass *bypass,
+					   struct net_device *dev)
+{
+	return 0;
+}
+
+static inline int bypass_unregister_instance(struct bypass *bypass,
+					     struct net_device *dev)
+{
+	return 0;
+}
+
+static inline int bypass_unregister_child(struct net_device *child_netdev)
+{
+	return 0;
+}
+
+#endif
+
+#endif /* _NET_BYPASS_H */
diff --git a/net/Kconfig b/net/Kconfig
index 0428f12c25c2..994445f4a96a 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -423,6 +423,24 @@ config MAY_USE_DEVLINK
 	  on MAY_USE_DEVLINK to ensure they do not cause link errors when
 	  devlink is a loadable module and the driver using it is built-in.

+config NET_BYPASS
+	tristate "Bypass interface"
+	---help---
+	  This provides a generic interface for paravirtual drivers to listen
+	  for netdev register/unregister/link change events from pci ethernet
+	  devices with the same MAC and take over their datapath. This also
+	  enables live migration of a VM with direct attached VF by failing
+	  over to the paravirtual datapath when the VF is unplugged.
+
+config MAY_USE_BYPASS
+	tristate
+	default m if NET_BYPASS=m
+	default y if NET_BYPASS=y || NET_BYPASS=n
+	help
+	  Drivers using the bypass infrastructure should have a dependency
+	  on MAY_USE_BYPASS to ensure they do not cause link errors when
+	  bypass is a loadable module and the driver using it is built-in.
+
 endif   # if NET

 # Used by archs to tell that they support BPF JIT compiler plus which flavour.
diff --git a/net/core/Makefile b/net/core/Makefile
index 6dbbba8c57ae..a9727ed1c8fc 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -30,3 +30,4 @@ obj-$(CONFIG_DST_CACHE) += dst_cache.o
 obj-$(CONFIG_HWBM) += hwbm.o
 obj-$(CONFIG_NET_DEVLINK) += devlink.o
 obj-$(CONFIG_GRO_CELLS) += gro_cells.o
+obj-$(CONFIG_NET_BYPASS) += bypass.o
diff --git a/net/core/bypass.c b/net/core/bypass.c
new file mode 100644
index 000000000000..7bde962ec3d4
--- /dev/null
+++ b/net/core/bypass.c
@@ -0,0 +1,406 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, Intel Corporation. */
+
+/* A common module to handle registrations and notifications for paravirtual
+ * drivers to enable accelerated datapath and support VF live migration.
+ *
+ * The notifier and event handling code is based on netvsc driver.
+ */
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/netdevice.h>
+#include <linux/netpoll.h>
+#include <linux/rtnetlink.h>
+#include <linux/if_vlan.h>
+#include <net/sch_generic.h>
+#include <uapi/linux/if_arp.h>
+#include <net/bypass.h>
+
+static LIST_HEAD(bypass_list);
+
+static DEFINE_MUTEX(bypass_mutex);
+
+struct bypass_instance *bypass_instance_alloc(struct net_device *dev)
+{
+	struct bypass_instance *bypass_instance;
+
+	bypass_instance = kzalloc(sizeof(*bypass_instance), GFP_KERNEL);
+	if (!bypass_instance)
+		return NULL;
+
+	dev_hold(dev);
+	rcu_assign_pointer(bypass_instance->bypass_netdev, dev);
+
+	return bypass_instance;
+}
+
+void bypass_instance_free(struct bypass_instance *bypass_instance)
+{
+	struct net_device *bypass_netdev;
+
+	bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+
+	dev_put(bypass_netdev);
+	kfree(bypass_instance);
+}
+
+static struct bypass_instance *bypass_get_instance_bymac(u8 *mac)
+{
+	struct bypass_instance *bypass_instance;
+	struct net_device *bypass_netdev;
+	struct bypass *bypass;
+
+	list_for_each_entry(bypass, &bypass_list, list) {
+		mutex_lock(&bypass->lock);
+		list_for_each_entry(bypass_instance, &bypass->instance_list,
+				    list) {
+			bypass_netdev =
+				rcu_dereference(bypass_instance->bypass_netdev);
+			if (ether_addr_equal(bypass_netdev->perm_addr, mac)) {
+				mutex_unlock(&bypass->lock);
+				goto out;
+			}
+		}
+		mutex_unlock(&bypass->lock);
+	}
+
+	bypass_instance = NULL;
+out:
+	return bypass_instance;
+}
+
+static int bypass_register_child(struct net_device *child_netdev)
+{
+	struct bypass_instance *bypass_instance;
+	struct bypass *bypass;
+	struct net_device *bypass_netdev;
+	int ret, orig_mtu;
+
+	ASSERT_RTNL();
+
+	mutex_lock(&bypass_mutex);
+	bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr);
+	if (!bypass_instance) {
+		mutex_unlock(&bypass_mutex);
+		goto done;
+	}
+
+	bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+	bypass = bypass_instance->bypass;
+	mutex_unlock(&bypass_mutex);
+
+	if (!bypass->ops->register_child)
+		goto done;
+
+	ret = bypass->ops->register_child(bypass_netdev, child_netdev);
+	if (ret != 0)
+		goto done;
+
+	ret = netdev_rx_handler_register(child_netdev,
+					 bypass->ops->handle_frame,
+					 bypass_netdev);
+	if (ret != 0) {
+		netdev_err(child_netdev,
+			   "can not register bypass rx handler (err = %d)\n",
+			   ret);
+		goto rx_handler_failed;
+	}
+
+	ret = netdev_upper_dev_link(child_netdev, bypass_netdev, NULL);
+	if (ret != 0) {
+		netdev_err(child_netdev,
+			   "can not set master device %s (err = %d)\n",
+			   bypass_netdev->name, ret);
+		goto upper_link_failed;
+	}
+
+	child_netdev->flags |= IFF_SLAVE;
+
+	if (netif_running(bypass_netdev)) {
+		ret = dev_open(child_netdev);
+		if (ret && (ret != -EBUSY)) {
+			netdev_err(bypass_netdev,
+				   "Opening child %s failed ret:%d\n",
+				   child_netdev->name, ret);
+			goto err_interface_up;
+		}
+	}
+
+	/* Align MTU of child with master */
+	orig_mtu = child_netdev->mtu;
+	ret = dev_set_mtu(child_netdev, bypass_netdev->mtu);
+	if (ret != 0) {
+		netdev_err(bypass_netdev,
+			   "unable to change mtu of %s to %u register failed\n",
+			   child_netdev->name, bypass_netdev->mtu);
+		goto err_set_mtu;
+	}
+
+	ret = bypass->ops->join_child(bypass_netdev, child_netdev);
+	if (ret != 0)
+		goto err_join;
+
+	call_netdevice_notifiers(NETDEV_JOIN, child_netdev);
+
+	goto done;
+
+err_join:
+	dev_set_mtu(child_netdev, orig_mtu);
+err_set_mtu:
+	dev_close(child_netdev);
+err_interface_up:
+	netdev_upper_dev_unlink(child_netdev, bypass_netdev);
+	child_netdev->flags &= ~IFF_SLAVE;
+upper_link_failed:
+	netdev_rx_handler_unregister(child_netdev);
+rx_handler_failed:
+	bypass->ops->unregister_child(bypass_netdev, child_netdev);
+
+done:
+	return NOTIFY_DONE;
+}
+
+int bypass_unregister_child(struct net_device *child_netdev)
+{
+	struct bypass_instance *bypass_instance;
+	struct net_device *bypass_netdev;
+	struct bypass *bypass;
+	int ret;
+
+	ASSERT_RTNL();
+
+	mutex_lock(&bypass_mutex);
+	bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr);
+	if (!bypass_instance) {
+		mutex_unlock(&bypass_mutex);
+		goto done;
+	}
+
+	bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+	bypass = bypass_instance->bypass;
+	mutex_unlock(&bypass_mutex);
+
+	ret = bypass->ops->release_child(bypass_netdev, child_netdev);
+	if (ret != 0)
+		goto done;
+
+	netdev_rx_handler_unregister(child_netdev);
+	netdev_upper_dev_unlink(child_netdev, bypass_netdev);
+	child_netdev->flags &= ~IFF_SLAVE;
+
+	if (!bypass->ops->unregister_child)
+		goto done;
+
+	bypass->ops->unregister_child(bypass_netdev, child_netdev);
+
+done:
+	return NOTIFY_DONE;
+}
+EXPORT_SYMBOL(bypass_unregister_child);
+
+static int bypass_update_link(struct net_device *child_netdev)
+{
+	struct bypass_instance *bypass_instance;
+	struct net_device *bypass_netdev;
+	struct bypass *bypass;
+
+	ASSERT_RTNL();
+
+	mutex_lock(&bypass_mutex);
+	bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr);
+	if (!bypass_instance) {
+		mutex_unlock(&bypass_mutex);
+		goto done;
+	}
+
+	bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+	bypass = bypass_instance->bypass;
+	mutex_unlock(&bypass_mutex);
+
+	if (!bypass->ops->update_link)
+		goto done;
+
+	bypass->ops->update_link(bypass_netdev, child_netdev);
+
+done:
+	return NOTIFY_DONE;
+}
+
+static bool bypass_validate_child_dev(struct net_device *dev)
+{
+	/* Avoid non-Ethernet type devices */
+	if (dev->type != ARPHRD_ETHER)
+		return false;
+
+	/* Avoid Vlan dev with same MAC registering as VF */
+	if (is_vlan_dev(dev))
+		return false;
+
+	/* Avoid Bonding master dev with same MAC registering as child dev */
+	if ((dev->priv_flags & IFF_BONDING) && (dev->flags & IFF_MASTER))
+		return false;
+
+	return true;
+}
+
+static int
+bypass_event(struct notifier_block *this, unsigned long event, void *ptr)
+{
+	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
+	struct bypass *bypass;
+
+	/* Skip Parent events */
+	mutex_lock(&bypass_mutex);
+	list_for_each_entry(bypass, &bypass_list, list) {
+		if (event_dev->netdev_ops == bypass->netdev_ops) {
+			mutex_unlock(&bypass_mutex);
+			return NOTIFY_DONE;
+		}
+	}
+	mutex_unlock(&bypass_mutex);
+
+	if (!bypass_validate_child_dev(event_dev))
+		return NOTIFY_DONE;
+
+	switch (event) {
+	case NETDEV_REGISTER:
+		return bypass_register_child(event_dev);
+	case NETDEV_UNREGISTER:
+		return bypass_unregister_child(event_dev);
+	case NETDEV_UP:
+	case NETDEV_DOWN:
+	case NETDEV_CHANGE:
+		return bypass_update_link(event_dev);
+	default:
+		return NOTIFY_DONE;
+	}
+}
+
+static struct notifier_block bypass_notifier = {
+	.notifier_call = bypass_event,
+};
+
+static void bypass_register_existing_child(struct net_device *bypass_netdev)
+{
+	struct net *net = dev_net(bypass_netdev);
+	struct net_device *dev;
+
+	rtnl_lock();
+	for_each_netdev(net, dev) {
+		if (dev == bypass_netdev)
+			continue;
+		if (!bypass_validate_child_dev(dev))
+			continue;
+		if (ether_addr_equal(bypass_netdev->perm_addr, dev->perm_addr))
+			bypass_register_child(dev);
+	}
+	rtnl_unlock();
+}
+
+int bypass_register_instance(struct bypass *bypass, struct net_device *dev)
+{
+	struct bypass_instance *bypass_instance;
+	struct net_device *bypass_netdev;
+	int ret = 0;
+
+	mutex_lock(&bypass->lock);
+	list_for_each_entry(bypass_instance, &bypass->instance_list, list) {
+		bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+		if (bypass_netdev == dev) {
+			ret = -EEXIST;
+			goto done;
+		}
+	}
+
+	bypass_instance = bypass_instance_alloc(dev);
+	if (!bypass_instance) {
+		ret = -ENOMEM;
+		goto done;
+	}
+
+	bypass_instance->bypass = bypass;
+	list_add_tail(&bypass_instance->list, &bypass->instance_list);
+
+done:
+	mutex_unlock(&bypass->lock);
+	bypass_register_existing_child(dev);
+	return ret;
+}
+EXPORT_SYMBOL(bypass_register_instance);
+
+int bypass_unregister_instance(struct bypass *bypass, struct net_device *dev)
+{
+	struct bypass_instance *bypass_instance;
+	struct net_device *bypass_netdev;
+	int ret = 0;
+
+	mutex_lock(&bypass->lock);
+	list_for_each_entry(bypass_instance, &bypass->instance_list, list) {
+		bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev);
+		if (bypass_netdev == dev) {
+			list_del(&bypass_instance->list);
+			bypass_instance_free(bypass_instance);
+			goto done;
+		}
+	}
+
+	ret = -ENOENT;
+done:
+	mutex_unlock(&bypass->lock);
+	return ret;
+}
+EXPORT_SYMBOL(bypass_unregister_instance);
+
+struct bypass *bypass_register_driver(const struct bypass_ops *ops,
+				      const struct net_device_ops *netdev_ops)
+{
+	struct bypass *bypass;
+
+	bypass = kzalloc(sizeof(*bypass), GFP_KERNEL);
+	if (!bypass)
+		return NULL;
+
+	bypass->ops = ops;
+	bypass->netdev_ops = netdev_ops;
+	INIT_LIST_HEAD(&bypass->instance_list);
+
+	mutex_lock(&bypass_mutex);
+	list_add_tail(&bypass->list, &bypass_list);
+	mutex_unlock(&bypass_mutex);
+
+	return bypass;
+}
+EXPORT_SYMBOL_GPL(bypass_register_driver);
+
+void bypass_unregister_driver(struct bypass *bypass)
+{
+	mutex_lock(&bypass_mutex);
+	list_del(&bypass->list);
+	mutex_unlock(&bypass_mutex);
+
+	kfree(bypass);
+}
+EXPORT_SYMBOL_GPL(bypass_unregister_driver);
+
+static __init int
+bypass_init(void)
+{
+	register_netdevice_notifier(&bypass_notifier);
+
+	return 0;
+}
+module_init(bypass_init);
+
+static __exit
+void bypass_exit(void)
+{
+	unregister_netdevice_notifier(&bypass_notifier);
+}
+module_exit(bypass_exit);
+
+MODULE_DESCRIPTION("Bypass infrastructure/interface for Paravirtual drivers");
+MODULE_LICENSE("GPL v2");
-- 
2.14.3
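To make the calling convention concrete, here is a minimal sketch (not
part of the patch) of how a paravirtual driver consumes this API; the
my_* names are placeholders and the ops bodies are elided:

#include <linux/netdevice.h>
#include <net/bypass.h>

/* Placeholder ops; a real driver fills these in (see patches 3 and 4). */
static const struct bypass_ops my_bypass_ops;
static const struct net_device_ops my_netdev_ops;

static struct bypass *my_bypass;

static int __init my_driver_init(void)
{
	/* Once per driver: register the callbacks plus the driver's own
	 * netdev_ops, which the shared notifier uses to skip events from
	 * the driver's own (parent) netdevs.
	 */
	my_bypass = bypass_register_driver(&my_bypass_ops, &my_netdev_ops);
	return my_bypass ? 0 : -ENOMEM;
}

static int my_probe_one(struct net_device *upper_netdev)
{
	/* Once per probed device: from here on, a child netdev that
	 * registers with the same permanent MAC is taken over and linked
	 * under upper_netdev.
	 */
	return bypass_register_instance(my_bypass, upper_netdev);
}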
Sridhar Samudrala
2018-Apr-05 21:08 UTC
[RFC PATCH net-next v5 3/4] virtio_net: Extend virtio to use VF datapath when available
This patch enables virtio_net to switch over to a VF datapath when a VF
netdev is present with the same MAC address. It allows live migration
of a VM with a direct attached VF without the need to set up a
bond/team between a VF and virtio net device in the guest.

The hypervisor needs to enable only one datapath at any time so that
packets don't get looped back to the VM over the other datapath. When a
VF is plugged, the virtio datapath link state can be marked as down.
The hypervisor needs to unplug the VF device from the guest on the
source host and reset the MAC filter of the VF to initiate failover of
the datapath to virtio before starting the migration. After the
migration is completed, the destination hypervisor sets the MAC filter
on the VF and plugs it back into the guest to switch over to the VF
datapath.

When the BACKUP feature is enabled, an additional netdev (bypass
netdev) is created that acts as a master device and tracks the state of
the 2 lower netdevs. The original virtio_net netdev is marked as the
'backup' netdev and a passthru device with the same MAC is registered
as the 'active' netdev.

This patch is based on the discussion initiated by Jesse on this
thread:
https://marc.info/?l=linux-virtualization&m=151189725224231&w=2

Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com>
---
 drivers/net/Kconfig      |   1 +
 drivers/net/virtio_net.c | 612 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 612 insertions(+), 1 deletion(-)

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 891846655000..9e2cf61fd1c1 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -331,6 +331,7 @@ config VETH
 config VIRTIO_NET
 	tristate "Virtio network driver"
 	depends on VIRTIO
+	depends on MAY_USE_BYPASS
 	---help---
 	  This is the virtual network driver for virtio.  It can be used with
 	  QEMU based VMMs (like KVM or Xen).  Say Y or M.
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index befb5944f3fd..86b2f8f2947d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -30,8 +30,11 @@
 #include <linux/cpu.h>
 #include <linux/average.h>
 #include <linux/filter.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
 #include <net/route.h>
 #include <net/xdp.h>
+#include <net/bypass.h>

 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
@@ -206,6 +209,9 @@ struct virtnet_info {
 	u32 speed;

 	unsigned long guest_offloads;
+
+	/* upper netdev created when BACKUP feature enabled */
+	struct net_device __rcu *bypass_netdev;
 };

 struct padded_vnet_hdr {
@@ -2275,6 +2281,22 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }

+static int virtnet_get_phys_port_name(struct net_device *dev, char *buf,
+				      size_t len)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	int ret;
+
+	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_BACKUP))
+		return -EOPNOTSUPP;
+
+	ret = snprintf(buf, len, "_bkup");
+	if (ret >= len)
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
 static const struct net_device_ops virtnet_netdev = {
 	.ndo_open            = virtnet_open,
 	.ndo_stop            = virtnet_close,
@@ -2292,6 +2314,7 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_xdp_xmit		= virtnet_xdp_xmit,
 	.ndo_xdp_flush		= virtnet_xdp_flush,
 	.ndo_features_check	= passthru_features_check,
+	.ndo_get_phys_port_name	= virtnet_get_phys_port_name,
 };

 static void virtnet_config_changed_work(struct work_struct *work)
@@ -2689,6 +2712,576 @@ static int virtnet_validate(struct virtio_device *vdev)
 	return 0;
 }

+/* START of functions supporting VIRTIO_NET_F_BACKUP feature.
+ * When BACKUP feature is enabled, an additional netdev (bypass netdev)
+ * is created that acts as a master device and tracks the state of the
+ * 2 lower netdevs.  The original virtio_net netdev is registered as
+ * 'backup' netdev and a passthru device with the same MAC is registered
+ * as 'active' netdev.
+ */
+
+/* bypass state maintained when BACKUP feature is enabled */
+struct virtnet_bypass_info {
+	/* passthru netdev with same MAC */
+	struct net_device __rcu *active_netdev;
+
+	/* virtio_net netdev */
+	struct net_device __rcu *backup_netdev;
+
+	/* active netdev stats */
+	struct rtnl_link_stats64 active_stats;
+
+	/* backup netdev stats */
+	struct rtnl_link_stats64 backup_stats;
+
+	/* aggregated stats */
+	struct rtnl_link_stats64 bypass_stats;
+
+	/* spinlock while updating stats */
+	spinlock_t stats_lock;
+};
+
+static int virtnet_bypass_open(struct net_device *dev)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *active_netdev, *backup_netdev;
+	int err;
+
+	netif_carrier_off(dev);
+	netif_tx_wake_all_queues(dev);
+
+	active_netdev = rtnl_dereference(vbi->active_netdev);
+	if (active_netdev) {
+		err = dev_open(active_netdev);
+		if (err)
+			goto err_active_open;
+	}
+
+	backup_netdev = rtnl_dereference(vbi->backup_netdev);
+	if (backup_netdev) {
+		err = dev_open(backup_netdev);
+		if (err)
+			goto err_backup_open;
+	}
+
+	return 0;
+
+err_backup_open:
+	dev_close(active_netdev);
+err_active_open:
+	netif_tx_disable(dev);
+	return err;
+}
+
+static int virtnet_bypass_close(struct net_device *dev)
+{
+	struct virtnet_bypass_info *vi = netdev_priv(dev);
+	struct net_device *child_netdev;
+
+	netif_tx_disable(dev);
+
+	child_netdev = rtnl_dereference(vi->active_netdev);
+	if (child_netdev)
+		dev_close(child_netdev);
+
+	child_netdev = rtnl_dereference(vi->backup_netdev);
+	if (child_netdev)
+		dev_close(child_netdev);
+
+	return 0;
+}
+
+static netdev_tx_t virtnet_bypass_drop_xmit(struct sk_buff *skb,
+					    struct net_device *dev)
+{
+	atomic_long_inc(&dev->tx_dropped);
+	dev_kfree_skb_any(skb);
+	return NETDEV_TX_OK;
+}
+
+static bool virtnet_bypass_xmit_ready(struct net_device *dev)
+{
+	return netif_running(dev) && netif_carrier_ok(dev);
+}
+
+static netdev_tx_t virtnet_bypass_start_xmit(struct sk_buff *skb,
+					     struct net_device *dev)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *xmit_dev;
+
+	/* Try xmit via active netdev followed by backup netdev */
+	xmit_dev = rcu_dereference_bh(vbi->active_netdev);
+	if (!xmit_dev || !virtnet_bypass_xmit_ready(xmit_dev)) {
+		xmit_dev = rcu_dereference_bh(vbi->backup_netdev);
+		if (!xmit_dev || !virtnet_bypass_xmit_ready(xmit_dev))
+			return virtnet_bypass_drop_xmit(skb, dev);
+	}
+
+	skb->dev = xmit_dev;
+	skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping;
+
+	return dev_queue_xmit(skb);
+}
+
+static u16 virtnet_bypass_select_queue(struct net_device *dev,
+				       struct sk_buff *skb, void *accel_priv,
+				       select_queue_fallback_t fallback)
+{
+	/* This helper function exists to help dev_pick_tx get the correct
+	 * destination queue.  Using a helper function skips a call to
+	 * skb_tx_hash and will put the skbs in the queue we expect on their
+	 * way down to the bonding driver.
+	 */
+	u16 txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
+
+	/* Save the original txq to restore before passing to the driver */
+	qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
+
+	if (unlikely(txq >= dev->real_num_tx_queues)) {
+		do {
+			txq -= dev->real_num_tx_queues;
+		} while (txq >= dev->real_num_tx_queues);
+	}
+
+	return txq;
+}
+
+/* fold stats, assuming all rtnl_link_stats64 fields are u64, but
+ * that some drivers can provide 32bit values only.
+ */
+static void virtnet_bypass_fold_stats(struct rtnl_link_stats64 *_res,
+				      const struct rtnl_link_stats64 *_new,
+				      const struct rtnl_link_stats64 *_old)
+{
+	const u64 *new = (const u64 *)_new;
+	const u64 *old = (const u64 *)_old;
+	u64 *res = (u64 *)_res;
+	int i;
+
+	for (i = 0; i < sizeof(*_res) / sizeof(u64); i++) {
+		u64 nv = new[i];
+		u64 ov = old[i];
+		s64 delta = nv - ov;
+
+		/* detects if this particular field is 32bit only */
+		if (((nv | ov) >> 32) == 0)
+			delta = (s64)(s32)((u32)nv - (u32)ov);
+
+		/* filter anomalies, some drivers reset their stats
+		 * at down/up events.
+		 */
+		if (delta > 0)
+			res[i] += delta;
+	}
+}
+
+static void virtnet_bypass_get_stats(struct net_device *dev,
+				     struct rtnl_link_stats64 *stats)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	const struct rtnl_link_stats64 *new;
+	struct rtnl_link_stats64 temp;
+	struct net_device *child_netdev;
+
+	spin_lock(&vbi->stats_lock);
+	memcpy(stats, &vbi->bypass_stats, sizeof(*stats));
+
+	rcu_read_lock();
+
+	child_netdev = rcu_dereference(vbi->active_netdev);
+	if (child_netdev) {
+		new = dev_get_stats(child_netdev, &temp);
+		virtnet_bypass_fold_stats(stats, new, &vbi->active_stats);
+		memcpy(&vbi->active_stats, new, sizeof(*new));
+	}
+
+	child_netdev = rcu_dereference(vbi->backup_netdev);
+	if (child_netdev) {
+		new = dev_get_stats(child_netdev, &temp);
+		virtnet_bypass_fold_stats(stats, new, &vbi->backup_stats);
+		memcpy(&vbi->backup_stats, new, sizeof(*new));
+	}
+
+	rcu_read_unlock();
+
+	memcpy(&vbi->bypass_stats, stats, sizeof(*stats));
+	spin_unlock(&vbi->stats_lock);
+}
+
+static int virtnet_bypass_change_mtu(struct net_device *dev, int new_mtu)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *active_netdev, *backup_netdev;
+	int ret = 0;
+
+	active_netdev = rcu_dereference(vbi->active_netdev);
+	if (active_netdev) {
+		ret = dev_set_mtu(active_netdev, new_mtu);
+		if (ret)
+			return ret;
+	}
+
+	backup_netdev = rcu_dereference(vbi->backup_netdev);
+	if (backup_netdev) {
+		ret = dev_set_mtu(backup_netdev, new_mtu);
+		if (ret) {
+			dev_set_mtu(active_netdev, dev->mtu);
+			return ret;
+		}
+	}
+
+	dev->mtu = new_mtu;
+	return 0;
+}
+
+static void virtnet_bypass_set_rx_mode(struct net_device *dev)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *child_netdev;
+
+	rcu_read_lock();
+
+	child_netdev = rcu_dereference(vbi->active_netdev);
+	if (child_netdev) {
+		dev_uc_sync_multiple(child_netdev, dev);
+		dev_mc_sync_multiple(child_netdev, dev);
+	}
+
+	child_netdev = rcu_dereference(vbi->backup_netdev);
+	if (child_netdev) {
+		dev_uc_sync_multiple(child_netdev, dev);
+		dev_mc_sync_multiple(child_netdev, dev);
+	}
+
+	rcu_read_unlock();
+}
+
+static const struct net_device_ops virtnet_bypass_netdev_ops = {
+	.ndo_open		= virtnet_bypass_open,
+	.ndo_stop		= virtnet_bypass_close,
+	.ndo_start_xmit		= virtnet_bypass_start_xmit,
+	.ndo_select_queue	= virtnet_bypass_select_queue,
+	.ndo_get_stats64	= virtnet_bypass_get_stats,
+	.ndo_change_mtu		= virtnet_bypass_change_mtu,
+	.ndo_set_rx_mode	= virtnet_bypass_set_rx_mode,
+	.ndo_validate_addr	= eth_validate_addr,
+	.ndo_features_check	= passthru_features_check,
+};
+
+static int
+virtnet_bypass_ethtool_get_link_ksettings(struct net_device *dev,
+					  struct ethtool_link_ksettings *cmd)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *child_netdev;
+
+	child_netdev = rtnl_dereference(vbi->active_netdev);
+	if (!child_netdev || !virtnet_bypass_xmit_ready(child_netdev)) {
+		child_netdev = rtnl_dereference(vbi->backup_netdev);
+		if (!child_netdev || !virtnet_bypass_xmit_ready(child_netdev)) {
+			cmd->base.duplex = DUPLEX_UNKNOWN;
+			cmd->base.port = PORT_OTHER;
+			cmd->base.speed = SPEED_UNKNOWN;
+
+			return 0;
+		}
+	}
+
+	return __ethtool_get_link_ksettings(child_netdev, cmd);
+}
+
+#define BYPASS_DRV_NAME "virtnet_bypass"
+#define BYPASS_DRV_VERSION "0.1"
+
+static void virtnet_bypass_ethtool_get_drvinfo(struct net_device *dev,
+					       struct ethtool_drvinfo *drvinfo)
+{
+	strlcpy(drvinfo->driver, BYPASS_DRV_NAME, sizeof(drvinfo->driver));
+	strlcpy(drvinfo->version, BYPASS_DRV_VERSION, sizeof(drvinfo->version));
+}
+
+static const struct ethtool_ops virtnet_bypass_ethtool_ops = {
+	.get_drvinfo		= virtnet_bypass_ethtool_get_drvinfo,
+	.get_link		= ethtool_op_get_link,
+	.get_link_ksettings	= virtnet_bypass_ethtool_get_link_ksettings,
+};
+
+static int virtnet_bypass_join_child(struct net_device *bypass_netdev,
+				     struct net_device *child_netdev)
+{
+	struct virtnet_bypass_info *vbi;
+	bool backup;
+
+	vbi = netdev_priv(bypass_netdev);
+	backup = (child_netdev->dev.parent == bypass_netdev->dev.parent);
+	if (backup ? rtnl_dereference(vbi->backup_netdev) :
+			rtnl_dereference(vbi->active_netdev)) {
+		netdev_info(bypass_netdev,
+			    "%s attempting to join bypass dev when %s already present\n",
+			    child_netdev->name, backup ? "backup" : "active");
+		return -EEXIST;
+	}
+
+	dev_hold(child_netdev);
+
+	if (backup) {
+		rcu_assign_pointer(vbi->backup_netdev, child_netdev);
+		dev_get_stats(vbi->backup_netdev, &vbi->backup_stats);
+	} else {
+		rcu_assign_pointer(vbi->active_netdev, child_netdev);
+		dev_get_stats(vbi->active_netdev, &vbi->active_stats);
+		bypass_netdev->min_mtu = child_netdev->min_mtu;
+		bypass_netdev->max_mtu = child_netdev->max_mtu;
+	}
+
+	netdev_info(bypass_netdev, "child:%s joined\n", child_netdev->name);
+
+	return 0;
+}
+
+static int virtnet_bypass_register_child(struct net_device *bypass_netdev,
+					 struct net_device *child_netdev)
+{
+	struct virtnet_bypass_info *vbi;
+	bool backup;
+
+	vbi = netdev_priv(bypass_netdev);
+	backup = (child_netdev->dev.parent == bypass_netdev->dev.parent);
+	if (backup ? rtnl_dereference(vbi->backup_netdev) :
+			rtnl_dereference(vbi->active_netdev)) {
+		netdev_info(bypass_netdev,
+			    "%s attempting to register bypass dev when %s already present\n",
+			    child_netdev->name, backup ? "backup" : "active");
+		return -EEXIST;
+	}
+
+	/* Avoid non pci devices as active netdev */
+	if (!backup && (!child_netdev->dev.parent ||
+			!dev_is_pci(child_netdev->dev.parent)))
+		return -EINVAL;
+
+	netdev_info(bypass_netdev, "child:%s registered\n", child_netdev->name);
+
+	return 0;
+}
+
+static int virtnet_bypass_release_child(struct net_device *bypass_netdev,
+					struct net_device *child_netdev)
+{
+	struct net_device *backup_netdev, *active_netdev;
+	struct virtnet_bypass_info *vbi;
+
+	vbi = netdev_priv(bypass_netdev);
+	active_netdev = rtnl_dereference(vbi->active_netdev);
+	backup_netdev = rtnl_dereference(vbi->backup_netdev);
+
+	if (child_netdev != active_netdev && child_netdev != backup_netdev)
+		return -EINVAL;
+
+	netdev_info(bypass_netdev, "child:%s released\n", child_netdev->name);
+
+	return 0;
+}
+
+static int virtnet_bypass_unregister_child(struct net_device *bypass_netdev,
+					   struct net_device *child_netdev)
+{
+	struct net_device *backup_netdev, *active_netdev;
+	struct virtnet_bypass_info *vbi;
+
+	vbi = netdev_priv(bypass_netdev);
+	active_netdev = rtnl_dereference(vbi->active_netdev);
+	backup_netdev = rtnl_dereference(vbi->backup_netdev);
+
+	if (child_netdev != active_netdev && child_netdev != backup_netdev)
+		return -EINVAL;
+
+	if (child_netdev == backup_netdev) {
+		RCU_INIT_POINTER(vbi->backup_netdev, NULL);
+	} else {
+		RCU_INIT_POINTER(vbi->active_netdev, NULL);
+		if (backup_netdev) {
+			bypass_netdev->min_mtu = backup_netdev->min_mtu;
+			bypass_netdev->max_mtu = backup_netdev->max_mtu;
+		}
+	}
+
+	dev_put(child_netdev);
+
+	netdev_info(bypass_netdev, "child:%s unregistered\n",
+		    child_netdev->name);
+
+	return 0;
+}
+
+static int virtnet_bypass_update_link(struct net_device *bypass_netdev,
+				      struct net_device *child_netdev)
+{
+	struct net_device *active_netdev, *backup_netdev;
+	struct virtnet_bypass_info *vbi;
+
+	if (!netif_running(bypass_netdev))
+		return 0;
+
+	vbi = netdev_priv(bypass_netdev);
+
+	active_netdev = rtnl_dereference(vbi->active_netdev);
+	backup_netdev = rtnl_dereference(vbi->backup_netdev);
+
+	if (child_netdev != active_netdev && child_netdev != backup_netdev)
+		return -EINVAL;
+
+	if ((active_netdev && virtnet_bypass_xmit_ready(active_netdev)) ||
+	    (backup_netdev && virtnet_bypass_xmit_ready(backup_netdev))) {
+		netif_carrier_on(bypass_netdev);
+		netif_tx_wake_all_queues(bypass_netdev);
+	} else {
+		netif_carrier_off(bypass_netdev);
+		netif_tx_stop_all_queues(bypass_netdev);
+	}
+
+	return 0;
+}
+
+/* Called when child dev is injecting data into network stack.
+ * Change the associated network device from lower dev to virtio.
+ * note: already called with rcu_read_lock
+ */
+static rx_handler_result_t virtnet_bypass_handle_frame(struct sk_buff **pskb)
+{
+	struct sk_buff *skb = *pskb;
+	struct net_device *ndev = rcu_dereference(skb->dev->rx_handler_data);
+
+	skb->dev = ndev;
+
+	return RX_HANDLER_ANOTHER;
+}
+
+static const struct bypass_ops virtnet_bypass_ops = {
+	.register_child = virtnet_bypass_register_child,
+	.join_child = virtnet_bypass_join_child,
+	.unregister_child = virtnet_bypass_unregister_child,
+	.release_child = virtnet_bypass_release_child,
+	.update_link = virtnet_bypass_update_link,
+	.handle_frame = virtnet_bypass_handle_frame,
+};
+
+static struct bypass *virtnet_bypass;
+
+static int virtnet_bypass_create(struct virtnet_info *vi)
+{
+	struct net_device *backup_netdev = vi->dev;
+	struct device *dev = &vi->vdev->dev;
+	struct net_device *bypass_netdev;
+	int res;
+
+	/* Alloc at least 2 queues, for now we are going with 16 assuming
+	 * that most devices being bonded won't have too many queues.
+	 */
+	bypass_netdev = alloc_etherdev_mq(sizeof(struct virtnet_bypass_info),
+					  16);
+	if (!bypass_netdev) {
+		dev_err(dev, "Unable to allocate bypass_netdev!\n");
+		return -ENOMEM;
+	}
+
+	dev_net_set(bypass_netdev, dev_net(backup_netdev));
+	SET_NETDEV_DEV(bypass_netdev, dev);
+
+	bypass_netdev->netdev_ops = &virtnet_bypass_netdev_ops;
+	bypass_netdev->ethtool_ops = &virtnet_bypass_ethtool_ops;
+
+	/* Initialize the device options */
+	bypass_netdev->flags |= IFF_MASTER;
+	bypass_netdev->priv_flags |= IFF_UNICAST_FLT | IFF_NO_QUEUE;
+	bypass_netdev->priv_flags &= ~(IFF_XMIT_DST_RELEASE |
+				       IFF_TX_SKB_SHARING);
+
+	/* don't acquire bypass netdev's netif_tx_lock when transmitting */
+	bypass_netdev->features |= NETIF_F_LLTX;
+
+	/* Don't allow bypass devices to change network namespaces. */
+	bypass_netdev->features |= NETIF_F_NETNS_LOCAL;
+
+	bypass_netdev->hw_features = NETIF_F_HW_CSUM | NETIF_F_SG |
+				     NETIF_F_FRAGLIST | NETIF_F_ALL_TSO |
+				     NETIF_F_HIGHDMA | NETIF_F_LRO;
+
+	bypass_netdev->hw_features |= NETIF_F_GSO_ENCAP_ALL;
+	bypass_netdev->features |= bypass_netdev->hw_features;
+
+	/* For now treat bypass netdev as VLAN challenged since we
+	 * cannot assume VLAN functionality with a VF
+	 */
+	bypass_netdev->features |= NETIF_F_VLAN_CHALLENGED;
+
+	memcpy(bypass_netdev->dev_addr, backup_netdev->dev_addr,
+	       bypass_netdev->addr_len);
+
+	bypass_netdev->min_mtu = backup_netdev->min_mtu;
+	bypass_netdev->max_mtu = backup_netdev->max_mtu;
+
+	res = register_netdev(bypass_netdev);
+	if (res < 0) {
+		dev_err(dev, "Unable to register bypass_netdev!\n");
+		goto err_register_netdev;
+	}
+
+	netif_carrier_off(bypass_netdev);
+
+	res = bypass_register_instance(virtnet_bypass, bypass_netdev);
+	if (res < 0)
+		goto err_bypass;
+
+	rcu_assign_pointer(vi->bypass_netdev, bypass_netdev);
+
+	return 0;
+
+err_bypass:
+	unregister_netdev(bypass_netdev);
+err_register_netdev:
+	free_netdev(bypass_netdev);
+
+	return res;
+}
+
+static void virtnet_bypass_destroy(struct virtnet_info *vi)
+{
+	struct net_device *bypass_netdev;
+	struct virtnet_bypass_info *vbi;
+	struct net_device *child_netdev;
+
+	bypass_netdev = rcu_dereference(vi->bypass_netdev);
+	/* no device found, nothing to free */
+	if (!bypass_netdev)
+		return;
+
+	vbi = netdev_priv(bypass_netdev);
+
+	netif_device_detach(bypass_netdev);
+
+	rtnl_lock();
+
+	child_netdev = rtnl_dereference(vbi->active_netdev);
+	if (child_netdev)
+		bypass_unregister_child(child_netdev);
+
+	child_netdev = rtnl_dereference(vbi->backup_netdev);
+	if (child_netdev)
+		bypass_unregister_child(child_netdev);
+
+	unregister_netdevice(bypass_netdev);
+
+	bypass_unregister_instance(virtnet_bypass, bypass_netdev);
+
+	rtnl_unlock();
+
+	free_netdev(bypass_netdev);
+}
+
+/* END of functions supporting VIRTIO_NET_F_BACKUP feature. */
+
 static int virtnet_probe(struct virtio_device *vdev)
 {
 	int i, err = -ENOMEM;
@@ -2839,10 +3432,15 @@ static int virtnet_probe(struct virtio_device *vdev)

 	virtnet_init_settings(dev);

+	if (virtio_has_feature(vdev, VIRTIO_NET_F_BACKUP)) {
+		if (virtnet_bypass_create(vi) != 0)
+			goto free_vqs;
+	}
+
 	err = register_netdev(dev);
 	if (err) {
 		pr_debug("virtio_net: registering device failed\n");
-		goto free_vqs;
+		goto free_bypass;
 	}

 	virtio_device_ready(vdev);
@@ -2879,6 +3477,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 	vi->vdev->config->reset(vdev);

 	unregister_netdev(dev);
+free_bypass:
+	virtnet_bypass_destroy(vi);
 free_vqs:
 	cancel_delayed_work_sync(&vi->refill);
 	free_receive_page_frags(vi);
@@ -2913,6 +3513,8 @@ static void virtnet_remove(struct virtio_device *vdev)

 	unregister_netdev(vi->dev);

+	virtnet_bypass_destroy(vi);
+
 	remove_vq_common(vi);

 	free_netdev(vi->dev);
@@ -2996,6 +3598,11 @@ static __init int virtio_net_driver_init(void)
 {
 	int ret;

+	virtnet_bypass = bypass_register_driver(&virtnet_bypass_ops,
+						&virtnet_bypass_netdev_ops);
+	if (!virtnet_bypass)
+		return -ENOMEM;
+
 	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "virtio/net:online",
 				      virtnet_cpu_online,
 				      virtnet_cpu_down_prep);
@@ -3010,12 +3617,14 @@ static __init int virtio_net_driver_init(void)
 	ret = register_virtio_driver(&virtio_net_driver);
 	if (ret)
 		goto err_virtio;
+
 	return 0;
 err_virtio:
 	cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD);
 err_dead:
 	cpuhp_remove_multi_state(virtionet_online);
 out:
+	bypass_unregister_driver(virtnet_bypass);
 	return ret;
 }
 module_init(virtio_net_driver_init);
@@ -3025,6 +3634,7 @@ static __exit void virtio_net_driver_exit(void)
 	unregister_virtio_driver(&virtio_net_driver);
 	cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD);
 	cpuhp_remove_multi_state(virtionet_online);
+	bypass_unregister_driver(virtnet_bypass);
 }
 module_exit(virtio_net_driver_exit);

-- 
2.14.3
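Since both the 'bypass' and 'backup' netdevs hang off the same PCI
device, the "_bkup" phys_port_name added above is what lets guest
userspace (e.g. udev naming rules) tell the two apart. A small sketch
of that distinction from kernel code, using the existing
dev_get_phys_port_name() helper (the predicate itself is illustrative,
not part of the patch):

#include <linux/netdevice.h>
#include <linux/string.h>

/* Sketch: identify the lower 'backup' virtio netdev by the "_bkup"
 * phys_port_name it reports when VIRTIO_NET_F_BACKUP is negotiated.
 */
static bool is_virtnet_backup(struct net_device *dev)
{
	char name[IFNAMSIZ];

	if (dev_get_phys_port_name(dev, name, sizeof(name)))
		return false;	/* no phys_port_name reported */

	return !strcmp(name, "_bkup");
}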
Sridhar Samudrala
2018-Apr-05 21:08 UTC
[RFC PATCH net-next v5 4/4] netvsc: refactor notifier/event handling code to use the bypass framework
Use the registration/notification framework supported by the generic
bypass infrastructure.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com>
---
 drivers/net/hyperv/Kconfig      |   1 +
 drivers/net/hyperv/netvsc_drv.c | 219 ++++++++++++----------------------------
 2 files changed, 63 insertions(+), 157 deletions(-)

diff --git a/drivers/net/hyperv/Kconfig b/drivers/net/hyperv/Kconfig
index 936968d23559..cc3a721baa18 100644
--- a/drivers/net/hyperv/Kconfig
+++ b/drivers/net/hyperv/Kconfig
@@ -1,5 +1,6 @@
 config HYPERV_NET
 	tristate "Microsoft Hyper-V virtual network driver"
 	depends on HYPERV
+	depends on MAY_USE_BYPASS
 	help
 	  Select this option to enable the Hyper-V virtual network driver.
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index ecc84954c511..b53a2be99bd2 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -43,6 +43,7 @@
 #include <net/pkt_sched.h>
 #include <net/checksum.h>
 #include <net/ip6_checksum.h>
+#include <net/bypass.h>

 #include "hyperv_net.h"

@@ -1763,46 +1764,6 @@ static void netvsc_link_change(struct work_struct *w)
 	rtnl_unlock();
 }

-static struct net_device *get_netvsc_bymac(const u8 *mac)
-{
-	struct net_device *dev;
-
-	ASSERT_RTNL();
-
-	for_each_netdev(&init_net, dev) {
-		if (dev->netdev_ops != &device_ops)
-			continue;	/* not a netvsc device */
-
-		if (ether_addr_equal(mac, dev->perm_addr))
-			return dev;
-	}
-
-	return NULL;
-}
-
-static struct net_device *get_netvsc_byref(struct net_device *vf_netdev)
-{
-	struct net_device *dev;
-
-	ASSERT_RTNL();
-
-	for_each_netdev(&init_net, dev) {
-		struct net_device_context *net_device_ctx;
-
-		if (dev->netdev_ops != &device_ops)
-			continue;	/* not a netvsc device */
-
-		net_device_ctx = netdev_priv(dev);
-		if (!rtnl_dereference(net_device_ctx->nvdev))
-			continue;	/* device is removed */
-
-		if (rtnl_dereference(net_device_ctx->vf_netdev) == vf_netdev)
-			return dev;	/* a match */
-	}
-
-	return NULL;
-}
-
 /* Called when VF is injecting data into network stack.
  * Change the associated network device from VF to netvsc.
  * note: already called with rcu_read_lock
@@ -1825,43 +1786,19 @@ static rx_handler_result_t netvsc_vf_handle_frame(struct sk_buff **pskb)
 	return RX_HANDLER_ANOTHER;
 }

-static int netvsc_vf_join(struct net_device *vf_netdev,
-			  struct net_device *ndev)
+static int netvsc_vf_join(struct net_device *ndev,
+			  struct net_device *vf_netdev)
 {
 	struct net_device_context *ndev_ctx = netdev_priv(ndev);
-	int ret;
-
-	ret = netdev_rx_handler_register(vf_netdev,
-					 netvsc_vf_handle_frame, ndev);
-	if (ret != 0) {
-		netdev_err(vf_netdev,
-			   "can not register netvsc VF receive handler (err = %d)\n",
-			   ret);
-		goto rx_handler_failed;
-	}
-
-	ret = netdev_upper_dev_link(vf_netdev, ndev, NULL);
-	if (ret != 0) {
-		netdev_err(vf_netdev,
-			   "can not set master device %s (err = %d)\n",
-			   ndev->name, ret);
-		goto upper_link_failed;
-	}
-
-	/* set slave flag before open to prevent IPv6 addrconf */
-	vf_netdev->flags |= IFF_SLAVE;

 	schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);

-	call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);
-
 	netdev_info(vf_netdev, "joined to %s\n", ndev->name);
-	return 0;

-upper_link_failed:
-	netdev_rx_handler_unregister(vf_netdev);
-rx_handler_failed:
-	return ret;
+	dev_hold(vf_netdev);
+	rcu_assign_pointer(ndev_ctx->vf_netdev, vf_netdev);
+
+	return 0;
 }

 static void __netvsc_vf_setup(struct net_device *ndev,
@@ -1914,85 +1851,84 @@ static void netvsc_vf_setup(struct work_struct *w)
 	rtnl_unlock();
 }

-static int netvsc_register_vf(struct net_device *vf_netdev)
+static int netvsc_register_vf(struct net_device *ndev,
+			      struct net_device *vf_netdev)
 {
-	struct net_device *ndev;
 	struct net_device_context *net_device_ctx;
 	struct netvsc_device *netvsc_dev;

-	if (vf_netdev->addr_len != ETH_ALEN)
-		return NOTIFY_DONE;
-
-	/*
-	 * We will use the MAC address to locate the synthetic interface to
-	 * associate with the VF interface. If we don't find a matching
-	 * synthetic interface, move on.
-	 */
-	ndev = get_netvsc_bymac(vf_netdev->perm_addr);
-	if (!ndev)
-		return NOTIFY_DONE;
-
 	net_device_ctx = netdev_priv(ndev);
 	netvsc_dev = rtnl_dereference(net_device_ctx->nvdev);
 	if (!netvsc_dev || rtnl_dereference(net_device_ctx->vf_netdev))
-		return NOTIFY_DONE;
-
-	if (netvsc_vf_join(vf_netdev, ndev) != 0)
-		return NOTIFY_DONE;
+		return -EEXIST;

 	netdev_info(ndev, "VF registering: %s\n", vf_netdev->name);

-	dev_hold(vf_netdev);
-	rcu_assign_pointer(net_device_ctx->vf_netdev, vf_netdev);
-	return NOTIFY_OK;
+	return 0;
 }

 /* VF up/down change detected, schedule to change data path */
-static int netvsc_vf_changed(struct net_device *vf_netdev)
+static int netvsc_vf_changed(struct net_device *ndev,
+			     struct net_device *vf_netdev)
 {
 	struct net_device_context *net_device_ctx;
 	struct netvsc_device *netvsc_dev;
-	struct net_device *ndev;
 	bool vf_is_up = netif_running(vf_netdev);

-	ndev = get_netvsc_byref(vf_netdev);
-	if (!ndev)
-		return NOTIFY_DONE;
-
 	net_device_ctx = netdev_priv(ndev);
 	netvsc_dev = rtnl_dereference(net_device_ctx->nvdev);
 	if (!netvsc_dev)
-		return NOTIFY_DONE;
+		return -EINVAL;

 	netvsc_switch_datapath(ndev, vf_is_up);
 	netdev_info(ndev, "Data path switched %s VF: %s\n",
 		    vf_is_up ? "to" : "from", vf_netdev->name);

-	return NOTIFY_OK;
+	return 0;
 }

-static int netvsc_unregister_vf(struct net_device *vf_netdev)
+static int netvsc_release_vf(struct net_device *ndev,
+			     struct net_device *vf_netdev)
 {
-	struct net_device *ndev;
 	struct net_device_context *net_device_ctx;

-	ndev = get_netvsc_byref(vf_netdev);
-	if (!ndev)
-		return NOTIFY_DONE;
-
 	net_device_ctx = netdev_priv(ndev);
+	if (vf_netdev != rtnl_dereference(net_device_ctx->vf_netdev))
+		return -EINVAL;
+
 	cancel_delayed_work_sync(&net_device_ctx->vf_takeover);

+	return 0;
+}
+
+static int netvsc_unregister_vf(struct net_device *ndev,
+				struct net_device *vf_netdev)
+{
+	struct net_device_context *net_device_ctx;
+
+	net_device_ctx = netdev_priv(ndev);
+	if (vf_netdev != rtnl_dereference(net_device_ctx->vf_netdev))
+		return -EINVAL;
+
 	netdev_info(ndev, "VF unregistering: %s\n", vf_netdev->name);

-	netdev_rx_handler_unregister(vf_netdev);
-	netdev_upper_dev_unlink(vf_netdev, ndev);
 	RCU_INIT_POINTER(net_device_ctx->vf_netdev, NULL);
 	dev_put(vf_netdev);

-	return NOTIFY_OK;
+	return 0;
 }

+static const struct bypass_ops netvsc_bypass_ops = {
+	.register_child = netvsc_register_vf,
+	.join_child = netvsc_vf_join,
+	.unregister_child = netvsc_unregister_vf,
+	.release_child = netvsc_release_vf,
+	.update_link = netvsc_vf_changed,
+	.handle_frame = netvsc_vf_handle_frame,
+};
+
+static struct bypass *netvsc_bypass;
+
 static int netvsc_probe(struct hv_device *dev,
 			const struct hv_vmbus_device_id *dev_id)
 {
@@ -2082,8 +2018,14 @@ static int netvsc_probe(struct hv_device *dev,
 		goto register_failed;
 	}

+	ret = bypass_register_instance(netvsc_bypass, net);
+	if (ret != 0)
+		goto err_bypass;
+
 	return ret;

+err_bypass:
+	unregister_netdev(net);
 register_failed:
 	rndis_filter_device_remove(dev, nvdev);
 rndis_failed:
@@ -2124,13 +2066,15 @@ static int netvsc_remove(struct hv_device *dev)
 	rtnl_lock();
 	vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
 	if (vf_netdev)
-		netvsc_unregister_vf(vf_netdev);
+		bypass_unregister_child(vf_netdev);

 	if (nvdev)
 		rndis_filter_device_remove(dev, nvdev);

 	unregister_netdevice(net);

+	bypass_unregister_instance(netvsc_bypass, net);
+
 	rtnl_unlock();
 	rcu_read_unlock();

@@ -2157,61 +2101,21 @@ static struct hv_driver netvsc_drv = {
 	.remove = netvsc_remove,
 };

-/*
- * On Hyper-V, every VF interface is matched with a corresponding
- * synthetic interface. The synthetic interface is presented first
- * to the guest. When the corresponding VF instance is registered,
- * we will take care of switching the data path.
- */
-static int netvsc_netdev_event(struct notifier_block *this,
-			       unsigned long event, void *ptr)
-{
-	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
-
-	/* Skip our own events */
-	if (event_dev->netdev_ops == &device_ops)
-		return NOTIFY_DONE;
-
-	/* Avoid non-Ethernet type devices */
-	if (event_dev->type != ARPHRD_ETHER)
-		return NOTIFY_DONE;
-
-	/* Avoid Vlan dev with same MAC registering as VF */
-	if (is_vlan_dev(event_dev))
-		return NOTIFY_DONE;
-
-	/* Avoid Bonding master dev with same MAC registering as VF */
-	if ((event_dev->priv_flags & IFF_BONDING) &&
-	    (event_dev->flags & IFF_MASTER))
-		return NOTIFY_DONE;
-
-	switch (event) {
-	case NETDEV_REGISTER:
-		return netvsc_register_vf(event_dev);
-	case NETDEV_UNREGISTER:
-		return netvsc_unregister_vf(event_dev);
-	case NETDEV_UP:
-	case NETDEV_DOWN:
-		return netvsc_vf_changed(event_dev);
-	default:
-		return NOTIFY_DONE;
-	}
-}
-
-static struct notifier_block netvsc_netdev_notifier = {
-	.notifier_call = netvsc_netdev_event,
-};
-
 static void __exit netvsc_drv_exit(void)
 {
-	unregister_netdevice_notifier(&netvsc_netdev_notifier);
 	vmbus_driver_unregister(&netvsc_drv);
+	bypass_unregister_driver(netvsc_bypass);
 }

 static int __init netvsc_drv_init(void)
 {
 	int ret;

+	netvsc_bypass = bypass_register_driver(&netvsc_bypass_ops,
+					       &device_ops);
+	if (!netvsc_bypass)
+		return -ENOMEM;
+
 	if (ring_size < RING_SIZE_MIN) {
 		ring_size = RING_SIZE_MIN;
 		pr_info("Increased ring_size to %u (min allowed)\n",
@@ -2221,10 +2125,11 @@ static int __init netvsc_drv_init(void)
 	netvsc_ring_reciprocal = reciprocal_value(netvsc_ring_bytes);

 	ret = vmbus_driver_register(&netvsc_drv);
-	if (ret)
+	if (ret) {
+		bypass_unregister_driver(netvsc_bypass);
 		return ret;
+	}

-	register_netdevice_notifier(&netvsc_netdev_notifier);
 	return 0;
 }

-- 
2.14.3
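This refactor can remove netvsc's open-coded MAC-based lookup
(get_netvsc_bymac()) because the bypass module now does the pairing on
NETDEV_REGISTER. The matching rule is simply permanent-MAC equality,
roughly as in this sketch (the predicate is illustrative; the module's
actual lookup is bypass_get_instance_bymac() in patch 2):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* Sketch of the pairing predicate the bypass module applies while
 * walking its instance list for a newly registered child netdev.
 */
static bool bypass_mac_match(const struct net_device *instance_dev,
			     const struct net_device *child_dev)
{
	return ether_addr_equal(instance_dev->perm_addr,
				child_dev->perm_addr);
}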
Jiri Pirko
2018-Apr-06 12:48 UTC
[RFC PATCH net-next v5 3/4] virtio_net: Extend virtio to use VF datapath when available
Thu, Apr 05, 2018 at 11:08:22PM CEST, sridhar.samudrala at intel.com wrote:
>This patch enables virtio_net to switch over to a VF datapath when a VF
>netdev is present with the same MAC address. It allows live migration
>of a VM with a direct attached VF without the need to setup a bond/team
>between a VF and virtio net device in the guest.
>
>The hypervisor needs to enable only one datapath at any time so that
>packets don't get looped back to the VM over the other datapath. When a VF
>is plugged, the virtio datapath link state can be marked as down. The
>hypervisor needs to unplug the VF device from the guest on the source host
>and reset the MAC filter of the VF to initiate failover of datapath to
>virtio before starting the migration. After the migration is completed,
>the destination hypervisor sets the MAC filter on the VF and plugs it back
>to the guest to switch over to VF datapath.
>
>When BACKUP feature is enabled, an additional netdev(bypass netdev) is
>created that acts as a master device and tracks the state of the 2 lower
>netdevs. The original virtio_net netdev is marked as 'backup' netdev and a
>passthru device with the same MAC is registered as 'active' netdev.
>
>This patch is based on the discussion initiated by Jesse on this thread.
>https://marc.info/?l=linux-virtualization&m=151189725224231&w=2
>
>Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com>

[...]

>+static int virtnet_bypass_open(struct net_device *dev)
>+{
>+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
>+	struct net_device *active_netdev, *backup_netdev;
>+	int err;
>+
>+	netif_carrier_off(dev);
>+	netif_tx_wake_all_queues(dev);
>+
>+	active_netdev = rtnl_dereference(vbi->active_netdev);
>+	if (active_netdev) {
>+		err = dev_open(active_netdev);
>+		if (err)
>+			goto err_active_open;
>+	}
>+
>+	backup_netdev = rtnl_dereference(vbi->backup_netdev);
>+	if (backup_netdev) {
>+		err = dev_open(backup_netdev);
>+		if (err)
>+			goto err_backup_open;
>+	}

This should be moved to bypass module. See "***" below.

>+
>+	return 0;
>+
>+err_backup_open:
>+	dev_close(active_netdev);
>+err_active_open:
>+	netif_tx_disable(dev);
>+	return err;
>+}
>+
>+static int virtnet_bypass_close(struct net_device *dev)
>+{
>+	struct virtnet_bypass_info *vi = netdev_priv(dev);
>+	struct net_device *child_netdev;
>+
>+	netif_tx_disable(dev);
>+
>+	child_netdev = rtnl_dereference(vi->active_netdev);
>+	if (child_netdev)
>+		dev_close(child_netdev);
>+
>+	child_netdev = rtnl_dereference(vi->backup_netdev);
>+	if (child_netdev)
>+		dev_close(child_netdev);

This should be moved to bypass module.

>+
>+	return 0;
>+}
>+
>+static netdev_tx_t virtnet_bypass_drop_xmit(struct sk_buff *skb,
>+					    struct net_device *dev)
>+{
>+	atomic_long_inc(&dev->tx_dropped);
>+	dev_kfree_skb_any(skb);
>+	return NETDEV_TX_OK;
>+}
>+
>+static bool virtnet_bypass_xmit_ready(struct net_device *dev)
>+{
>+	return netif_running(dev) && netif_carrier_ok(dev);
>+}
>+
>+static netdev_tx_t virtnet_bypass_start_xmit(struct sk_buff *skb,
>+					     struct net_device *dev)
>+{
>+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
>+	struct net_device *xmit_dev;
>+
>+	/* Try xmit via active netdev followed by backup netdev */
>+	xmit_dev = rcu_dereference_bh(vbi->active_netdev);
>+	if (!xmit_dev || !virtnet_bypass_xmit_ready(xmit_dev)) {
>+		xmit_dev = rcu_dereference_bh(vbi->backup_netdev);

This should be moved to bypass module.

>+		if (!xmit_dev || !virtnet_bypass_xmit_ready(xmit_dev))
>+			return virtnet_bypass_drop_xmit(skb, dev);
>+	}
>+
>+	skb->dev = xmit_dev;
>+	skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping;
>+
>+	return dev_queue_xmit(skb);
>+}

[...]

>+static void virtnet_bypass_get_stats(struct net_device *dev,
>+				     struct rtnl_link_stats64 *stats)
>+{
>+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
>+	const struct rtnl_link_stats64 *new;
>+	struct rtnl_link_stats64 temp;
>+	struct net_device *child_netdev;
>+
>+	spin_lock(&vbi->stats_lock);
>+	memcpy(stats, &vbi->bypass_stats, sizeof(*stats));
>+
>+	rcu_read_lock();
>+
>+	child_netdev = rcu_dereference(vbi->active_netdev);
>+	if (child_netdev) {
>+		new = dev_get_stats(child_netdev, &temp);
>+		virtnet_bypass_fold_stats(stats, new, &vbi->active_stats);
>+		memcpy(&vbi->active_stats, new, sizeof(*new));
>+	}
>+
>+	child_netdev = rcu_dereference(vbi->backup_netdev);
>+	if (child_netdev) {
>+		new = dev_get_stats(child_netdev, &temp);
>+		virtnet_bypass_fold_stats(stats, new, &vbi->backup_stats);
>+		memcpy(&vbi->backup_stats, new, sizeof(*new));
>+	}
>+
>+	rcu_read_unlock();
>+
>+	memcpy(&vbi->bypass_stats, stats, sizeof(*stats));
>+	spin_unlock(&vbi->stats_lock);
>+}

This should be moved to bypass module.

>+
>+static int virtnet_bypass_change_mtu(struct net_device *dev, int new_mtu)
>+{
>+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
>+	struct net_device *active_netdev, *backup_netdev;
>+	int ret = 0;
>+
>+	active_netdev = rcu_dereference(vbi->active_netdev);
>+	if (active_netdev) {
>+		ret = dev_set_mtu(active_netdev, new_mtu);
>+		if (ret)
>+			return ret;
>+	}
>+
>+	backup_netdev = rcu_dereference(vbi->backup_netdev);
>+	if (backup_netdev) {
>+		ret = dev_set_mtu(backup_netdev, new_mtu);
>+		if (ret) {
>+			dev_set_mtu(active_netdev, dev->mtu);
>+			return ret;
>+		}
>+	}
>+
>+	dev->mtu = new_mtu;
>+	return 0;
>+}

This should be moved to bypass module.

>+
>+static void virtnet_bypass_set_rx_mode(struct net_device *dev)
>+{
>+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
>+	struct net_device *child_netdev;
>+
>+	rcu_read_lock();
>+
>+	child_netdev = rcu_dereference(vbi->active_netdev);
>+	if (child_netdev) {
>+		dev_uc_sync_multiple(child_netdev, dev);
>+		dev_mc_sync_multiple(child_netdev, dev);
>+	}
>+
>+	child_netdev = rcu_dereference(vbi->backup_netdev);
>+	if (child_netdev) {
>+		dev_uc_sync_multiple(child_netdev, dev);
>+		dev_mc_sync_multiple(child_netdev, dev);
>+	}
>+
>+	rcu_read_unlock();
>+}

This should be moved to bypass module.

[...]

>+#define BYPASS_DRV_NAME "virtnet_bypass"
>+#define 
BYPASS_DRV_VERSION "0.1" >+ >+static void virtnet_bypass_ethtool_get_drvinfo(struct net_device *dev, >+ struct ethtool_drvinfo *drvinfo) >+{ >+ strlcpy(drvinfo->driver, BYPASS_DRV_NAME, sizeof(drvinfo->driver)); >+ strlcpy(drvinfo->version, BYPASS_DRV_VERSION, sizeof(drvinfo->version)); >+} >+ >+static const struct ethtool_ops virtnet_bypass_ethtool_ops = { >+ .get_drvinfo = virtnet_bypass_ethtool_get_drvinfo, >+ .get_link = ethtool_op_get_link, >+ .get_link_ksettings = virtnet_bypass_ethtool_get_link_ksettings, >+}; >+ >+static int virtnet_bypass_join_child(struct net_device *bypass_netdev, >+ struct net_device *child_netdev) >+{ >+ struct virtnet_bypass_info *vbi; >+ bool backup; >+ >+ vbi = netdev_priv(bypass_netdev); >+ backup = (child_netdev->dev.parent == bypass_netdev->dev.parent); >+ if (backup ? rtnl_dereference(vbi->backup_netdev) : >+ rtnl_dereference(vbi->active_netdev)) { >+ netdev_info(bypass_netdev, >+ "%s attempting to join bypass dev when %s already present\n", >+ child_netdev->name, backup ? "backup" : "active");Bypass module should check if there is already some other netdev enslaved and refuse right there. The active/backup terminology is quite confusing. From the bonding world that means active is the one which is currently used for tx of the packets. And it depends on link and other things what netdev is declared active. However here, it is different. Backup is always the virtio_net instance even when it is active. Odd. Please change the terminology. For "active" I suggest to use name "stolen". *** Also, the 2 slave netdev pointers should be stored in the bypass module instance, not in the drivers.>+ return -EEXIST; >+ } >+ >+ dev_hold(child_netdev); >+ >+ if (backup) { >+ rcu_assign_pointer(vbi->backup_netdev, child_netdev); >+ dev_get_stats(vbi->backup_netdev, &vbi->backup_stats); >+ } else { >+ rcu_assign_pointer(vbi->active_netdev, child_netdev); >+ dev_get_stats(vbi->active_netdev, &vbi->active_stats); >+ bypass_netdev->min_mtu = child_netdev->min_mtu; >+ bypass_netdev->max_mtu = child_netdev->max_mtu; >+ } >+ >+ netdev_info(bypass_netdev, "child:%s joined\n", child_netdev->name); >+ >+ return 0; >+} >+ >+static int virtnet_bypass_register_child(struct net_device *bypass_netdev, >+ struct net_device *child_netdev) >+{ >+ struct virtnet_bypass_info *vbi; >+ bool backup; >+ >+ vbi = netdev_priv(bypass_netdev); >+ backup = (child_netdev->dev.parent == bypass_netdev->dev.parent); >+ if (backup ? rtnl_dereference(vbi->backup_netdev) : >+ rtnl_dereference(vbi->active_netdev)) { >+ netdev_info(bypass_netdev, >+ "%s attempting to register bypass dev when %s already present\n", >+ child_netdev->name, backup ? 
"backup" : "active"); >+ return -EEXIST; >+ } >+ >+ /* Avoid non pci devices as active netdev */ >+ if (!backup && (!child_netdev->dev.parent || >+ !dev_is_pci(child_netdev->dev.parent))) >+ return -EINVAL; >+ >+ netdev_info(bypass_netdev, "child:%s registered\n", child_netdev->name); >+ >+ return 0; >+} >+ >+static int virtnet_bypass_release_child(struct net_device *bypass_netdev, >+ struct net_device *child_netdev) >+{ >+ struct net_device *backup_netdev, *active_netdev; >+ struct virtnet_bypass_info *vbi; >+ >+ vbi = netdev_priv(bypass_netdev); >+ active_netdev = rtnl_dereference(vbi->active_netdev); >+ backup_netdev = rtnl_dereference(vbi->backup_netdev); >+ >+ if (child_netdev != active_netdev && child_netdev != backup_netdev) >+ return -EINVAL; >+ >+ netdev_info(bypass_netdev, "child:%s released\n", child_netdev->name); >+ >+ return 0; >+} >+ >+static int virtnet_bypass_unregister_child(struct net_device *bypass_netdev, >+ struct net_device *child_netdev) >+{ >+ struct net_device *backup_netdev, *active_netdev; >+ struct virtnet_bypass_info *vbi; >+ >+ vbi = netdev_priv(bypass_netdev); >+ active_netdev = rtnl_dereference(vbi->active_netdev); >+ backup_netdev = rtnl_dereference(vbi->backup_netdev); >+ >+ if (child_netdev != active_netdev && child_netdev != backup_netdev) >+ return -EINVAL; >+ >+ if (child_netdev == backup_netdev) { >+ RCU_INIT_POINTER(vbi->backup_netdev, NULL); >+ } else { >+ RCU_INIT_POINTER(vbi->active_netdev, NULL); >+ if (backup_netdev) { >+ bypass_netdev->min_mtu = backup_netdev->min_mtu; >+ bypass_netdev->max_mtu = backup_netdev->max_mtu; >+ } >+ } >+ >+ dev_put(child_netdev); >+ >+ netdev_info(bypass_netdev, "child:%s unregistered\n", >+ child_netdev->name); >+ >+ return 0; >+} >+ >+static int virtnet_bypass_update_link(struct net_device *bypass_netdev, >+ struct net_device *child_netdev) >+{ >+ struct net_device *active_netdev, *backup_netdev; >+ struct virtnet_bypass_info *vbi; >+ >+ if (!netif_running(bypass_netdev)) >+ return 0; >+ >+ vbi = netdev_priv(bypass_netdev); >+ >+ active_netdev = rtnl_dereference(vbi->active_netdev); >+ backup_netdev = rtnl_dereference(vbi->backup_netdev); >+ >+ if (child_netdev != active_netdev && child_netdev != backup_netdev) >+ return -EINVAL; >+ >+ if ((active_netdev && virtnet_bypass_xmit_ready(active_netdev)) || >+ (backup_netdev && virtnet_bypass_xmit_ready(backup_netdev))) { >+ netif_carrier_on(bypass_netdev); >+ netif_tx_wake_all_queues(bypass_netdev); >+ } else { >+ netif_carrier_off(bypass_netdev); >+ netif_tx_stop_all_queues(bypass_netdev); >+ } >+ >+ return 0; >+} >+ >+/* Called when child dev is injecting data into network stack. >+ * Change the associated network device from lower dev to virtio. >+ * note: already called with rcu_read_lock >+ */ >+static rx_handler_result_t virtnet_bypass_handle_frame(struct sk_buff **pskb) >+{ >+ struct sk_buff *skb = *pskb; >+ struct net_device *ndev = rcu_dereference(skb->dev->rx_handler_data); >+ >+ skb->dev = ndev; >+ >+ return RX_HANDLER_ANOTHER; >+}Hmm, you have the rx_handler defined in drivers and you register it in bypass module. It is odd because here you assume that the bypass module passed "ndev" and rx_handler_data. Also, you don't need advanced features rx_handler provides. 
>+ >+static const struct bypass_ops virtnet_bypass_ops = { >+ .register_child = virtnet_bypass_register_child, >+ .join_child = virtnet_bypass_join_child, >+ .unregister_child = virtnet_bypass_unregister_child, >+ .release_child = virtnet_bypass_release_child, >+ .update_link = virtnet_bypass_update_link, >+ .handle_frame = virtnet_bypass_handle_frame, >+}; >+ >+static struct bypass *virtnet_bypass; >+ >+static int virtnet_bypass_create(struct virtnet_info *vi) >+{ >+ struct net_device *backup_netdev = vi->dev; >+ struct device *dev = &vi->vdev->dev; >+ struct net_device *bypass_netdev; >+ int res; >+ >+ /* Alloc at least 2 queues, for now we are going with 16 assuming >+ * that most devices being bonded won't have too many queues. >+ */ >+ bypass_netdev = alloc_etherdev_mq(sizeof(struct virtnet_bypass_info), >+ 16); >+ if (!bypass_netdev) { >+ dev_err(dev, "Unable to allocate bypass_netdev!\n"); >+ return -ENOMEM; >+ } >+ >+ dev_net_set(bypass_netdev, dev_net(backup_netdev)); >+ SET_NETDEV_DEV(bypass_netdev, dev); >+ >+ bypass_netdev->netdev_ops = &virtnet_bypass_netdev_ops; >+ bypass_netdev->ethtool_ops = &virtnet_bypass_ethtool_ops; >+ >+ /* Initialize the device options */ >+ bypass_netdev->flags |= IFF_MASTER;I think I pointed that out already. Don't use "IFF_MASTER". That is specific to bonding. As I suggested in the reply to patch #2, you should introduce IFF_BYPASS. Also, this flag should be set by the bypass module. Just create the netdev, do the things specific to virtio, and then call into the bypass module, passing the netdev so it can do the rest. I think that the flags, features etc. would also be fine to set there.>+ bypass_netdev->priv_flags |= IFF_UNICAST_FLT | IFF_NO_QUEUE; >+ bypass_netdev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | >+ IFF_TX_SKB_SHARING); >+ >+ /* don't acquire bypass netdev's netif_tx_lock when transmitting */ >+ bypass_netdev->features |= NETIF_F_LLTX; >+ >+ /* Don't allow bypass devices to change network namespaces. */ >+ bypass_netdev->features |= NETIF_F_NETNS_LOCAL; >+ >+ bypass_netdev->hw_features = NETIF_F_HW_CSUM | NETIF_F_SG | >+ NETIF_F_FRAGLIST | NETIF_F_ALL_TSO | >+ NETIF_F_HIGHDMA | NETIF_F_LRO; >+ >+ bypass_netdev->hw_features |= NETIF_F_GSO_ENCAP_ALL; >+ bypass_netdev->features |= bypass_netdev->hw_features; >+ >+ /* For now treat bypass netdev as VLAN challenged since we >+ * cannot assume VLAN functionality with a VFWhy? I don't see such drivers.
But to be 100% correct, you should check the NETIF_F_VLAN_CHALLENGED feature in bypass module during VF enslave and forbid to do so if it is on.>+ */ >+ bypass_netdev->features |= NETIF_F_VLAN_CHALLENGED; >+ >+ memcpy(bypass_netdev->dev_addr, backup_netdev->dev_addr, >+ bypass_netdev->addr_len); >+ >+ bypass_netdev->min_mtu = backup_netdev->min_mtu; >+ bypass_netdev->max_mtu = backup_netdev->max_mtu; >+ >+ res = register_netdev(bypass_netdev); >+ if (res < 0) { >+ dev_err(dev, "Unable to register bypass_netdev!\n"); >+ goto err_register_netdev; >+ } >+ >+ netif_carrier_off(bypass_netdev); >+ >+ res = bypass_register_instance(virtnet_bypass, bypass_netdev); >+ if (res < 0) >+ goto err_bypass; >+ >+ rcu_assign_pointer(vi->bypass_netdev, bypass_netdev); >+ >+ return 0; >+ >+err_bypass: >+ unregister_netdev(bypass_netdev); >+err_register_netdev: >+ free_netdev(bypass_netdev); >+ >+ return res; >+} >+ >+static void virtnet_bypass_destroy(struct virtnet_info *vi) >+{ >+ struct net_device *bypass_netdev; >+ struct virtnet_bypass_info *vbi; >+ struct net_device *child_netdev; >+ >+ bypass_netdev = rcu_dereference(vi->bypass_netdev); >+ /* no device found, nothing to free */ >+ if (!bypass_netdev) >+ return; >+ >+ vbi = netdev_priv(bypass_netdev); >+ >+ netif_device_detach(bypass_netdev); >+ >+ rtnl_lock(); >+ >+ child_netdev = rtnl_dereference(vbi->active_netdev); >+ if (child_netdev) >+ bypass_unregister_child(child_netdev); >+ >+ child_netdev = rtnl_dereference(vbi->backup_netdev); >+ if (child_netdev) >+ bypass_unregister_child(child_netdev); >+ >+ unregister_netdevice(bypass_netdev); >+ >+ bypass_unregister_instance(virtnet_bypass, bypass_netdev); >+ >+ rtnl_unlock(); >+ >+ free_netdev(bypass_netdev); >+} >+ >+/* END of functions supporting VIRTIO_NET_F_BACKUP feature. 
*/ >+ > static int virtnet_probe(struct virtio_device *vdev) > { > int i, err = -ENOMEM; >@@ -2839,10 +3432,15 @@ static int virtnet_probe(struct virtio_device *vdev) > > virtnet_init_settings(dev); > >+ if (virtio_has_feature(vdev, VIRTIO_NET_F_BACKUP)) { >+ if (virtnet_bypass_create(vi) != 0)You need to do: err = virtnet_bypass_create(vi); if (err) otherwise you ignore err and virtnet_probe would return 0;>+ goto free_vqs; >+ } >+ > err = register_netdev(dev); > if (err) { > pr_debug("virtio_net: registering device failed\n"); >- goto free_vqs; >+ goto free_bypass; > } > > virtio_device_ready(vdev); >@@ -2879,6 +3477,8 @@ static int virtnet_probe(struct virtio_device *vdev) > vi->vdev->config->reset(vdev); > > unregister_netdev(dev); >+free_bypass: >+ virtnet_bypass_destroy(vi); > free_vqs: > cancel_delayed_work_sync(&vi->refill); > free_receive_page_frags(vi); >@@ -2913,6 +3513,8 @@ static void virtnet_remove(struct virtio_device *vdev) > > unregister_netdev(vi->dev); > >+ virtnet_bypass_destroy(vi); >+ > remove_vq_common(vi); > > free_netdev(vi->dev); >@@ -2996,6 +3598,11 @@ static __init int virtio_net_driver_init(void) > { > int ret; > >+ virtnet_bypass = bypass_register_driver(&virtnet_bypass_ops, >+ &virtnet_bypass_netdev_ops); >+ if (!virtnet_bypass) >+ return -ENOMEM;If CONFIG_NET_BYPASS is undefined, you will always return -ENOMEM here.>+ > ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "virtio/net:online", > virtnet_cpu_online, > virtnet_cpu_down_prep); >@@ -3010,12 +3617,14 @@ static __init int virtio_net_driver_init(void) > ret = register_virtio_driver(&virtio_net_driver); > if (ret) > goto err_virtio; >+ > return 0; > err_virtio: > cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD); > err_dead: > cpuhp_remove_multi_state(virtionet_online); > out: >+ bypass_unregister_driver(virtnet_bypass); > return ret; > } > module_init(virtio_net_driver_init); >@@ -3025,6 +3634,7 @@ static __exit void virtio_net_driver_exit(void) > unregister_virtio_driver(&virtio_net_driver); > cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD); > cpuhp_remove_multi_state(virtionet_online); >+ bypass_unregister_driver(virtnet_bypass); > } > module_exit(virtio_net_driver_exit); > >-- >2.14.3 >
Jiri Pirko
2018-Apr-06 12:57 UTC
[RFC PATCH net-next v5 2/4] net: Introduce generic bypass module
Thu, Apr 05, 2018 at 11:08:21PM CEST, sridhar.samudrala at intel.com wrote:>This provides a generic interface for paravirtual drivers to listen >for netdev register/unregister/link change events from pci ethernet >devices with the same MAC and takeover their datapath. The notifier and >event handling code is based on the existing netvsc implementation. A >paravirtual driver can use this module by registering a set of ops and >each instance of the device when it is probed. > >Signed-off-by: Sridhar Samudrala <sridhar.samudrala at intel.com> >--- > include/net/bypass.h | 80 ++++++++++ > net/Kconfig | 18 +++ > net/core/Makefile | 1 + > net/core/bypass.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++ > 4 files changed, 505 insertions(+) > create mode 100644 include/net/bypass.h > create mode 100644 net/core/bypass.c > >diff --git a/include/net/bypass.h b/include/net/bypass.h >new file mode 100644 >index 000000000000..e2dd122f951a >--- /dev/null >+++ b/include/net/bypass.h >@@ -0,0 +1,80 @@ >+// SPDX-License-Identifier: GPL-2.0 >+/* Copyright (c) 2018, Intel Corporation. */ >+ >+#ifndef _NET_BYPASS_H >+#define _NET_BYPASS_H >+ >+#include <linux/netdevice.h> >+ >+struct bypass_ops {Perhaps "net_bypass_" would be better prefix for this module structs and functions. No strong opinion though.>+ int (*register_child)(struct net_device *bypass_netdev, >+ struct net_device *child_netdev);We have master/slave upper/lower netdevices. This adds "child". Consider using some existing names. Not sure if possible without loss of meaning.>+ int (*join_child)(struct net_device *bypass_netdev, >+ struct net_device *child_netdev); >+ int (*unregister_child)(struct net_device *bypass_netdev, >+ struct net_device *child_netdev); >+ int (*release_child)(struct net_device *bypass_netdev, >+ struct net_device *child_netdev); >+ int (*update_link)(struct net_device *bypass_netdev, >+ struct net_device *child_netdev); >+ rx_handler_result_t (*handle_frame)(struct sk_buff **pskb); >+}; >+ >+struct bypass_instance { >+ struct list_head list; >+ struct net_device __rcu *bypass_netdev; >+ struct bypass *bypass; >+}; >+ >+struct bypass { >+ struct list_head list; >+ const struct bypass_ops *ops; >+ const struct net_device_ops *netdev_ops; >+ struct list_head instance_list; >+ struct mutex lock; >+}; >+ >+#if IS_ENABLED(CONFIG_NET_BYPASS) >+ >+struct bypass *bypass_register_driver(const struct bypass_ops *ops, >+ const struct net_device_ops *netdev_ops); >+void bypass_unregister_driver(struct bypass *bypass); >+ >+int bypass_register_instance(struct bypass *bypass, struct net_device *dev); >+int bypass_unregister_instance(struct bypass *bypass, struct net_device *dev); >+ >+int bypass_unregister_child(struct net_device *child_netdev); >+ >+#else >+ >+static inline >+struct bypass *bypass_register_driver(const struct bypass_ops *ops, >+ const struct net_device_ops *netdev_ops) >+{ >+ return NULL; >+} >+ >+static inline void bypass_unregister_driver(struct bypass *bypass) >+{ >+} >+ >+static inline int bypass_register_instance(struct bypass *bypass, >+ struct net_device *dev) >+{ >+ return 0; >+} >+ >+static inline int bypass_unregister_instance(struct bypass *bypass, >+ struct net_device *dev) >+{ >+ return 0; >+} >+ >+static inline int bypass_unregister_child(struct net_device *child_netdev) >+{ >+ return 0; >+} >+ >+#endif >+ >+#endif /* _NET_BYPASS_H */ >diff --git a/net/Kconfig b/net/Kconfig >index 0428f12c25c2..994445f4a96a 100644 >--- a/net/Kconfig >+++ b/net/Kconfig >@@ -423,6 +423,24 @@ config 
MAY_USE_DEVLINK > on MAY_USE_DEVLINK to ensure they do not cause link errors when > devlink is a loadable module and the driver using it is built-in. > >+config NET_BYPASS >+ tristate "Bypass interface" >+ ---help--- >+ This provides a generic interface for paravirtual drivers to listen >+ for netdev register/unregister/link change events from pci ethernet >+ devices with the same MAC and takeover their datapath. This also >+ enables live migration of a VM with direct attached VF by failing >+ over to the paravirtual datapath when the VF is unplugged. >+ >+config MAY_USE_BYPASS >+ tristate >+ default m if NET_BYPASS=m >+ default y if NET_BYPASS=y || NET_BYPASS=n >+ help >+ Drivers using the bypass infrastructure should have a dependency >+ on MAY_USE_BYPASS to ensure they do not cause link errors when >+ bypass is a loadable module and the driver using it is built-in. >+ > endif # if NET > > # Used by archs to tell that they support BPF JIT compiler plus which flavour. >diff --git a/net/core/Makefile b/net/core/Makefile >index 6dbbba8c57ae..a9727ed1c8fc 100644 >--- a/net/core/Makefile >+++ b/net/core/Makefile >@@ -30,3 +30,4 @@ obj-$(CONFIG_DST_CACHE) += dst_cache.o > obj-$(CONFIG_HWBM) += hwbm.o > obj-$(CONFIG_NET_DEVLINK) += devlink.o > obj-$(CONFIG_GRO_CELLS) += gro_cells.o >+obj-$(CONFIG_NET_BYPASS) += bypass.o >diff --git a/net/core/bypass.c b/net/core/bypass.c >new file mode 100644 >index 000000000000..7bde962ec3d4 >--- /dev/null >+++ b/net/core/bypass.c >@@ -0,0 +1,406 @@ >+// SPDX-License-Identifier: GPL-2.0 >+/* Copyright (c) 2018, Intel Corporation. */ >+ >+/* A common module to handle registrations and notifications for paravirtual >+ * drivers to enable accelerated datapath and support VF live migration. >+ * >+ * The notifier and event handling code is based on netvsc driver. >+ */ >+ >+#include <linux/netdevice.h> >+#include <linux/etherdevice.h> >+#include <linux/ethtool.h> >+#include <linux/module.h> >+#include <linux/slab.h> >+#include <linux/netdevice.h> >+#include <linux/netpoll.h> >+#include <linux/rtnetlink.h> >+#include <linux/if_vlan.h> >+#include <net/sch_generic.h> >+#include <uapi/linux/if_arp.h> >+#include <net/bypass.h> >+ >+static LIST_HEAD(bypass_list); >+ >+static DEFINE_MUTEX(bypass_mutex);Why mutex? Apparently you don't need to sleep while holding a lock. 
Simple spinlock would do.>+ >+struct bypass_instance *bypass_instance_alloc(struct net_device *dev) >+{ >+ struct bypass_instance *bypass_instance; >+ >+ bypass_instance = kzalloc(sizeof(*bypass_instance), GFP_KERNEL); >+ if (!bypass_instance) >+ return NULL; >+ >+ dev_hold(dev); >+ rcu_assign_pointer(bypass_instance->bypass_netdev, dev); >+ >+ return bypass_instance; >+} >+ >+void bypass_instance_free(struct bypass_instance *bypass_instance) >+{ >+ struct net_device *bypass_netdev; >+ >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ >+ dev_put(bypass_netdev); >+ kfree(bypass_instance); >+} >+ >+static struct bypass_instance *bypass_get_instance_bymac(u8 *mac) >+{ >+ struct bypass_instance *bypass_instance; >+ struct net_device *bypass_netdev; >+ struct bypass *bypass; >+ >+ list_for_each_entry(bypass, &bypass_list, list) { >+ mutex_lock(&bypass->lock); >+ list_for_each_entry(bypass_instance, &bypass->instance_list, >+ list) { >+ bypass_netdev = >+ rcu_dereference(bypass_instance->bypass_netdev); >+ if (ether_addr_equal(bypass_netdev->perm_addr, mac)) { >+ mutex_unlock(&bypass->lock); >+ goto out; >+ } >+ } >+ mutex_unlock(&bypass->lock); >+ } >+ >+ bypass_instance = NULL; >+out: >+ return bypass_instance; >+} >+ >+static int bypass_register_child(struct net_device *child_netdev) >+{ >+ struct bypass_instance *bypass_instance; >+ struct bypass *bypass; >+ struct net_device *bypass_netdev; >+ int ret, orig_mtu; >+ >+ ASSERT_RTNL(); >+ >+ mutex_lock(&bypass_mutex); >+ bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr); >+ if (!bypass_instance) { >+ mutex_unlock(&bypass_mutex); >+ goto done; >+ } >+ >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ bypass = bypass_instance->bypass; >+ mutex_unlock(&bypass_mutex); >+ >+ if (!bypass->ops->register_child) >+ goto done; >+ >+ ret = bypass->ops->register_child(bypass_netdev, child_netdev); >+ if (ret != 0) >+ goto done; >+ >+ ret = netdev_rx_handler_register(child_netdev, >+ bypass->ops->handle_frame, >+ bypass_netdev); >+ if (ret != 0) { >+ netdev_err(child_netdev, >+ "can not register bypass rx handler (err = %d)\n", >+ ret); >+ goto rx_handler_failed; >+ } >+ >+ ret = netdev_upper_dev_link(child_netdev, bypass_netdev, NULL); >+ if (ret != 0) { >+ netdev_err(child_netdev,No line-wrap is needed here and in other cases like this.>+ "can not set master device %s (err = %d)\n", >+ bypass_netdev->name, ret); >+ goto upper_link_failed; >+ } >+ >+ child_netdev->flags |= IFF_SLAVE;Don't reuse IFF_SLAVE. That is a bonding-specific thing. I know that netvsc uses it; it is wrong. Please rather introduce IFF_BYPASS for the master and IFF_BYPASS_SLAVE for the slaves. Something like the sketch below.
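Sketch only; the bit values are made up and would have to be allocated in enum netdev_priv_flags without colliding with existing flags:

	/* in enum netdev_priv_flags: */
	IFF_BYPASS		= 1 << 25,
	IFF_BYPASS_SLAVE	= 1 << 26,

static inline bool netif_is_bypass_master(const struct net_device *dev)
{
	return dev->priv_flags & IFF_BYPASS;
}

static inline bool netif_is_bypass_slave(const struct net_device *dev)
{
	return dev->priv_flags & IFF_BYPASS_SLAVE;
}

With helpers like these in netdevice.h, anything that needs to identify these devices can do it without abusing the bonding flags, and the netvsc IFF_SLAVE usage could eventually be converted too.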
>+ >+ if (netif_running(bypass_netdev)) { >+ ret = dev_open(child_netdev); >+ if (ret && (ret != -EBUSY)) { >+ netdev_err(bypass_netdev, >+ "Opening child %s failed ret:%d\n", >+ child_netdev->name, ret); >+ goto err_interface_up; >+ } >+ } >+ >+ /* Align MTU of child with master */ >+ orig_mtu = child_netdev->mtu; >+ ret = dev_set_mtu(child_netdev, bypass_netdev->mtu); >+ if (ret != 0) { >+ netdev_err(bypass_netdev, >+ "unable to change mtu of %s to %u register failed\n", >+ child_netdev->name, bypass_netdev->mtu); >+ goto err_set_mtu; >+ } >+ >+ ret = bypass->ops->join_child(bypass_netdev, child_netdev); >+ if (ret != 0) >+ goto err_join; >+ >+ call_netdevice_notifiers(NETDEV_JOIN, child_netdev); >+ >+ goto done; >+ >+err_join: >+ dev_set_mtu(child_netdev, orig_mtu); >+err_set_mtu: >+ dev_close(child_netdev); >+err_interface_up: >+ netdev_upper_dev_unlink(child_netdev, bypass_netdev); >+ child_netdev->flags &= ~IFF_SLAVE; >+upper_link_failed: >+ netdev_rx_handler_unregister(child_netdev); >+rx_handler_failed: >+ bypass->ops->unregister_child(bypass_netdev, child_netdev); >+ >+done: >+ return NOTIFY_DONE; >+} >+ >+int bypass_unregister_child(struct net_device *child_netdev) >+{ >+ struct bypass_instance *bypass_instance; >+ struct net_device *bypass_netdev; >+ struct bypass *bypass; >+ int ret; >+ >+ ASSERT_RTNL(); >+ >+ mutex_lock(&bypass_mutex); >+ bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr); >+ if (!bypass_instance) { >+ mutex_unlock(&bypass_mutex); >+ goto done; >+ } >+ >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ bypass = bypass_instance->bypass; >+ mutex_unlock(&bypass_mutex); >+ >+ ret = bypass->ops->release_child(bypass_netdev, child_netdev); >+ if (ret != 0) >+ goto done; >+ >+ netdev_rx_handler_unregister(child_netdev); >+ netdev_upper_dev_unlink(child_netdev, bypass_netdev); >+ child_netdev->flags &= ~IFF_SLAVE; >+ >+ if (!bypass->ops->unregister_child) >+ goto done; >+ >+ bypass->ops->unregister_child(bypass_netdev, child_netdev); >+ >+done: >+ return NOTIFY_DONE; >+} >+EXPORT_SYMBOL(bypass_unregister_child);Please use "EXPORT_SYMBOL_GPL" for all exported symbols.>+ >+static int bypass_update_link(struct net_device *child_netdev) >+{ >+ struct bypass_instance *bypass_instance; >+ struct net_device *bypass_netdev; >+ struct bypass *bypass; >+ >+ ASSERT_RTNL(); >+ >+ mutex_lock(&bypass_mutex); >+ bypass_instance = bypass_get_instance_bymac(child_netdev->perm_addr);You don't really need this lookup.
The kernel knows about the master device, you can just use netdev_master_upper_dev_get_rcu() to get it.>+ if (!bypass_instance) { >+ mutex_unlock(&bypass_mutex); >+ goto done; >+ } >+ >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ bypass = bypass_instance->bypass; >+ mutex_unlock(&bypass_mutex); >+ >+ if (!bypass->ops->update_link) >+ goto done; >+ >+ bypass->ops->update_link(bypass_netdev, child_netdev); >+ >+done: >+ return NOTIFY_DONE; >+} >+ >+static bool bypass_validate_child_dev(struct net_device *dev) >+{ >+ /* Avoid non-Ethernet type devices */ >+ if (dev->type != ARPHRD_ETHER) >+ return false; >+ >+ /* Avoid Vlan dev with same MAC registering as VF */ >+ if (is_vlan_dev(dev)) >+ return false; >+ >+ /* Avoid Bonding master dev with same MAC registering as child dev */ >+ if ((dev->priv_flags & IFF_BONDING) && (dev->flags & IFF_MASTER)) >+ return false; >+ >+ return true; >+} >+ >+static int >+bypass_event(struct notifier_block *this, unsigned long event, void *ptr) >+{ >+ struct net_device *event_dev = netdev_notifier_info_to_dev(ptr); >+ struct bypass *bypass; >+ >+ /* Skip Parent events */ >+ mutex_lock(&bypass_mutex); >+ list_for_each_entry(bypass, &bypass_list, list) { >+ if (event_dev->netdev_ops == bypass->netdev_ops) { >+ mutex_unlock(&bypass_mutex); >+ return NOTIFY_DONE; >+ }What you need instead of this is an identification helper netif_is_bypass_master() similar to netif_is_team_master(), netif_is_bridge_master() etc. (see the sketch above).>+ } >+ mutex_unlock(&bypass_mutex); >+ >+ if (!bypass_validate_child_dev(event_dev)) >+ return NOTIFY_DONE; >+ >+ switch (event) { >+ case NETDEV_REGISTER: >+ return bypass_register_child(event_dev); >+ case NETDEV_UNREGISTER: >+ return bypass_unregister_child(event_dev); >+ case NETDEV_UP: >+ case NETDEV_DOWN: >+ case NETDEV_CHANGE: >+ return bypass_update_link(event_dev); >+ default: >+ return NOTIFY_DONE; >+ } >+} >+ >+static struct notifier_block bypass_notifier = { >+ .notifier_call = bypass_event, >+}; >+ >+static void bypass_register_existing_child(struct net_device *bypass_netdev) >+{ >+ struct net *net = dev_net(bypass_netdev); >+ struct net_device *dev; >+ >+ rtnl_lock(); >+ for_each_netdev(net, dev) { >+ if (dev == bypass_netdev) >+ continue; >+ if (!bypass_validate_child_dev(dev)) >+ continue; >+ if (ether_addr_equal(bypass_netdev->perm_addr, dev->perm_addr)) >+ bypass_register_child(dev); >+ } >+ rtnl_unlock(); >+} >+ >+int bypass_register_instance(struct bypass *bypass, struct net_device *dev) >+{ >+ struct bypass_instance *bypass_instance;No need to allocate this instance here. You can just have it embedded inside the netdevice priv and pass a pointer to it here. You can pass the pointer back to the driver when you call the ops, as the driver can get its priv back from it. I would also call it "struct bypass_master" and this function "bypass_master_register". It should contain the ops pointer too.>+ struct net_device *bypass_netdev; >+ int ret = 0; >+ >+ mutex_lock(&bypass->lock); >+ list_for_each_entry(bypass_instance, &bypass->instance_list, list) { >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ if (bypass_netdev == dev) {This means the driver registered one netdev twice. That is a bug in the driver, so a WARN_ON would be nice here to point that out. See the sketch below.
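Roughly like this (untested sketch that pulls the above together; bypass_master_list/bypass_lock are assumed to be file-scope in bypass.c, netif_is_bypass_master() is the helper sketched earlier, and the field layout is only illustrative):

/* embedded in the master netdev's priv by the driver, no kzalloc here */
struct bypass_master {
	struct list_head list;
	struct net_device __rcu *bypass_netdev;
	const struct bypass_ops *ops;
	/* the two slave netdev pointers live here, not in the driver,
	 * see the "***" note in my reply to patch 3
	 */
	struct net_device __rcu *active_slave;
	struct net_device __rcu *backup_slave;
};

int bypass_master_register(struct net_device *dev,
			   const struct bypass_ops *ops,
			   struct bypass_master *master)
{
	if (WARN_ON(netif_is_bypass_master(dev)))
		return -EEXIST;		/* driver bug: registered twice */

	dev_hold(dev);
	rcu_assign_pointer(master->bypass_netdev, dev);
	master->ops = ops;
	dev->priv_flags |= IFF_BYPASS;

	spin_lock(&bypass_lock);
	list_add_tail(&master->list, &bypass_master_list);
	spin_unlock(&bypass_lock);

	return 0;
}

The ops callbacks can then get "master" passed back, and the driver reaches its priv from it with container_of().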
>+ ret = -EEXIST; >+ goto done; >+ } >+ } >+ >+ bypass_instance = bypass_instance_alloc(dev); >+ if (!bypass_instance) { >+ ret = -ENOMEM; >+ goto done; >+ } >+ >+ bypass_instance->bypass = bypass; >+ list_add_tail(&bypass_instance->list, &bypass->instance_list); >+ >+done: >+ mutex_unlock(&bypass->lock); >+ bypass_register_existing_child(dev); >+ return ret; >+} >+EXPORT_SYMBOL(bypass_register_instance); >+ >+int bypass_unregister_instance(struct bypass *bypass, struct net_device *dev) >+{ >+ struct bypass_instance *bypass_instance; >+ struct net_device *bypass_netdev; >+ int ret = 0; >+ >+ mutex_lock(&bypass->lock); >+ list_for_each_entry(bypass_instance, &bypass->instance_list, list) { >+ bypass_netdev = rcu_dereference(bypass_instance->bypass_netdev); >+ if (bypass_netdev == dev) { >+ list_del(&bypass_instance->list); >+ bypass_instance_free(bypass_instance); >+ goto done; >+ } >+ } >+ >+ ret = -ENOENT; >+done: >+ mutex_unlock(&bypass->lock); >+ return ret; >+} >+EXPORT_SYMBOL(bypass_unregister_instance); >+ >+struct bypass *bypass_register_driver(const struct bypass_ops *ops, >+ const struct net_device_ops *netdev_ops)I don't see why you need a list of drivers. What you need is just a list of instances - bypass masters (probably to call them like that in the code as well). Well, you can use the common netdevice list for that purpose with the identification helper I mentioned above. Then you need no lists and no mutexes/spinlocks.>+{ >+ struct bypass *bypass; >+ >+ bypass = kzalloc(sizeof(*bypass), GFP_KERNEL); >+ if (!bypass) >+ return NULL; >+ >+ bypass->ops = ops; >+ bypass->netdev_ops = netdev_ops; >+ INIT_LIST_HEAD(&bypass->instance_list); >+ >+ mutex_lock(&bypass_mutex); >+ list_add_tail(&bypass->list, &bypass_list); >+ mutex_unlock(&bypass_mutex); >+ >+ return bypass; >+} >+EXPORT_SYMBOL_GPL(bypass_register_driver); >+ >+void bypass_unregister_driver(struct bypass *bypass) >+{ >+ mutex_lock(&bypass_mutex); >+ list_del(&bypass->list); >+ mutex_unlock(&bypass_mutex); >+ >+ kfree(bypass); >+} >+EXPORT_SYMBOL_GPL(bypass_unregister_driver); >+ >+static __init int >+bypass_init(void) >+{ >+ register_netdevice_notifier(&bypass_notifier); >+ >+ return 0; >+} >+module_init(bypass_init); >+ >+static __exit >+void bypass_exit(void) >+{ >+ unregister_netdevice_notifier(&bypass_notifier); >+} >+module_exit(bypass_exit); >+ >+MODULE_DESCRIPTION("Bypass infrastructure/interface for Paravirtual drivers"); >+MODULE_LICENSE("GPL v2"); >-- >2.14.3 >