Considering the complexity of virtio-net.c and the new features we want to add, it is time to split virtio-net.c into multiple independent module files. This will make maintenance and the addition of new features easier. AF_XDP support will be added later, at which point a separate xsk.c file will be added.

This patchset splits virtio-net.c into these parts:

* virtnet.c: virtio net device ops (napi, tx, rx, device ops, ...)
* virtnet_common.c: virtio net common code
* virtnet_ethtool.c: virtio net ethtool callbacks
* virtnet_ctrl.c: virtio net ctrl queue command APIs
* virtnet_virtio.c: virtio net virtio callbacks/ops (driver register, virtio probe, virtio free, ...)

Please review. Thanks.

Xuan Zhuo (16):
  virtio_net: add a separate directory for virtio-net
  virtio_net: move struct to header file
  virtio_net: add prefix to the struct inside header file
  virtio_net: separating cpu-related funs
  virtio_net: separate virtnet_ctrl_set_queues()
  virtio_net: separate virtnet_ctrl_set_mac_address()
  virtio_net: remove lock from virtnet_ack_link_announce()
  virtio_net: separating the APIs of cq
  virtio_net: introduce virtnet_rq_update_stats()
  virtio_net: separating the funcs of ethtool
  virtio_net: introduce virtnet_dev_rx_queue_group()
  virtio_net: introduce virtnet_get_netdev()
  virtio_net: prepare for virtio
  virtio_net: move virtnet_[en/dis]able_delayed_refill to header file
  virtio_net: add APIs to register/unregister virtio driver
  virtio_net: separating the virtio code

 MAINTAINERS                                    |    2 +-
 drivers/net/Kconfig                            |    8 +-
 drivers/net/Makefile                           |    2 +-
 drivers/net/virtio/Kconfig                     |   11 +
 drivers/net/virtio/Makefile                    |   10 +
 .../net/{virtio_net.c => virtio/virtnet.c}     | 2368 ++---------------
 drivers/net/virtio/virtnet.h                   |  213 ++
 drivers/net/virtio/virtnet_common.c            |  138 +
 drivers/net/virtio/virtnet_common.h            |   14 +
 drivers/net/virtio/virtnet_ctrl.c              |  272 ++
 drivers/net/virtio/virtnet_ctrl.h              |   45 +
 drivers/net/virtio/virtnet_ethtool.c           |  578 ++++
 drivers/net/virtio/virtnet_ethtool.h           |    8 +
 drivers/net/virtio/virtnet_virtio.c            |  880 ++++++
 drivers/net/virtio/virtnet_virtio.h            |    8 +
 15 files changed, 2366 insertions(+), 2191 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/virtnet.c} (50%)
 create mode 100644 drivers/net/virtio/virtnet.h
 create mode 100644 drivers/net/virtio/virtnet_common.c
 create mode 100644 drivers/net/virtio/virtnet_common.h
 create mode 100644 drivers/net/virtio/virtnet_ctrl.c
 create mode 100644 drivers/net/virtio/virtnet_ctrl.h
 create mode 100644 drivers/net/virtio/virtnet_ethtool.c
 create mode 100644 drivers/net/virtio/virtnet_ethtool.h
 create mode 100644 drivers/net/virtio/virtnet_virtio.c
 create mode 100644 drivers/net/virtio/virtnet_virtio.h

-- 
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 01/16] virtio_net: add a separate directory for virtio-net
Create a separate directory for virtio-net. Considering the complexity of virtio-net.c and the new features we want to add, it is time to split virtio-net.c into multiple independent module files. This is beneficial to the maintenance and adding new functions. And AF_XDP support will be added later, then a separate xsk.c file will be added. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- MAINTAINERS | 2 +- drivers/net/Kconfig | 8 +------- drivers/net/Makefile | 2 +- drivers/net/virtio/Kconfig | 11 +++++++++++ drivers/net/virtio/Makefile | 8 ++++++++ drivers/net/{virtio_net.c => virtio/virtnet.c} | 0 6 files changed, 22 insertions(+), 9 deletions(-) create mode 100644 drivers/net/virtio/Kconfig create mode 100644 drivers/net/virtio/Makefile rename drivers/net/{virtio_net.c => virtio/virtnet.c} (100%) diff --git a/MAINTAINERS b/MAINTAINERS index fbbda4671e73..6bb3c2199003 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -22083,7 +22083,7 @@ F: Documentation/devicetree/bindings/virtio/ F: Documentation/driver-api/virtio/ F: drivers/block/virtio_blk.c F: drivers/crypto/virtio/ -F: drivers/net/virtio_net.c +F: drivers/net/virtio/ F: drivers/vdpa/ F: drivers/virtio/ F: include/linux/vdpa.h diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index c34bd432da27..23a169d248b5 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -407,13 +407,7 @@ config VETH When one end receives the packet it appears on its pair and vice versa. -config VIRTIO_NET - tristate "Virtio network driver" - depends on VIRTIO - select NET_FAILOVER - help - This is the virtual network driver for virtio. It can be used with - QEMU based VMMs (like KVM or Xen). Say Y or M. 
+source "drivers/net/virtio/Kconfig" config NLMON tristate "Virtual netlink monitoring device" diff --git a/drivers/net/Makefile b/drivers/net/Makefile index e26f98f897c5..47537dd0f120 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -31,7 +31,7 @@ obj-$(CONFIG_NET_TEAM) += team/ obj-$(CONFIG_TUN) += tun.o obj-$(CONFIG_TAP) += tap.o obj-$(CONFIG_VETH) += veth.o -obj-$(CONFIG_VIRTIO_NET) += virtio_net.o +obj-$(CONFIG_VIRTIO_NET) += virtio/ obj-$(CONFIG_VXLAN) += vxlan/ obj-$(CONFIG_GENEVE) += geneve.o obj-$(CONFIG_BAREUDP) += bareudp.o diff --git a/drivers/net/virtio/Kconfig b/drivers/net/virtio/Kconfig new file mode 100644 index 000000000000..9bc2a2fc6c3e --- /dev/null +++ b/drivers/net/virtio/Kconfig @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# virtio-net device configuration +# +config VIRTIO_NET + tristate "Virtio network driver" + depends on VIRTIO + select NET_FAILOVER + help + This is the virtual network driver for virtio. It can be used with + QEMU based VMMs (like KVM or Xen). Say Y or M. diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile new file mode 100644 index 000000000000..ccd45c0e5064 --- /dev/null +++ b/drivers/net/virtio/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for the virtio network device drivers. +# + +obj-$(CONFIG_VIRTIO_NET) += virtio_net.o + +virtio_net-y := virtnet.o diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio/virtnet.c similarity index 100% rename from drivers/net/virtio_net.c rename to drivers/net/virtio/virtnet.c -- 2.32.0.3.g01195cf9f
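The kbuild composite-object pattern introduced here (`obj-$(CONFIG_VIRTIO_NET) += virtio_net.o` plus a `virtio_net-y` list) is what lets the later patches add source files without touching drivers/net/Makefile again. By the end of the series, given the five .c files named in the cover letter, the Makefile would plausibly look like the sketch below (a guess at the final shape, not the exact posted version):

```make
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the virtio network device drivers.
#

obj-$(CONFIG_VIRTIO_NET) += virtio_net.o

# kbuild links every object listed in virtio_net-y into virtio_net.ko,
# so new files only need to be appended here.
virtio_net-y := virtnet.o virtnet_common.o virtnet_ethtool.o \
		virtnet_ctrl.o virtnet_virtio.o
```

The module name (virtio_net.ko) stays the same as before the split, so userspace that loads the driver by name is unaffected.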
Moving some structures and macros to header file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 181 +--------------------------------- drivers/net/virtio/virtnet.h | 184 +++++++++++++++++++++++++++++++++++ 2 files changed, 186 insertions(+), 179 deletions(-) create mode 100644 drivers/net/virtio/virtnet.h diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index e2560b6f7980..5ca354e29483 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -6,7 +6,6 @@ //#define DEBUG #include <linux/netdevice.h> #include <linux/etherdevice.h> -#include <linux/ethtool.h> #include <linux/module.h> #include <linux/virtio.h> #include <linux/virtio_net.h> @@ -16,13 +15,14 @@ #include <linux/if_vlan.h> #include <linux/slab.h> #include <linux/cpu.h> -#include <linux/average.h> #include <linux/filter.h> #include <linux/kernel.h> #include <net/route.h> #include <net/xdp.h> #include <net/net_failover.h> +#include "virtnet.h" + static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -35,26 +35,6 @@ module_param(napi_tx, bool, 0644); #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN) #define GOOD_COPY_LEN 128 -#define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) - -/* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */ -#define VIRTIO_XDP_HEADROOM 256 - -/* Separating two types of XDP xmit */ -#define VIRTIO_XDP_TX BIT(0) -#define VIRTIO_XDP_REDIR BIT(1) - -#define VIRTIO_XDP_FLAG BIT(0) - -/* RX packet size EWMA. The average packet size is used to determine the packet - * buffer size when refilling RX rings. As the entire RX ring may be refilled - * at once, the weight is chosen so that the EWMA will be insensitive to short- - * term, transient changes in packet size. 
- */ -DECLARE_EWMA(pkt_len, 0, 64) - -#define VIRTNET_DRIVER_VERSION "1.0.0" - static const unsigned long guest_offloads[] = { VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, @@ -78,28 +58,6 @@ struct virtnet_stat_desc { size_t offset; }; -struct virtnet_sq_stats { - struct u64_stats_sync syncp; - u64 packets; - u64 bytes; - u64 xdp_tx; - u64 xdp_tx_drops; - u64 kicks; - u64 tx_timeouts; -}; - -struct virtnet_rq_stats { - struct u64_stats_sync syncp; - u64 packets; - u64 bytes; - u64 drops; - u64 xdp_packets; - u64 xdp_tx; - u64 xdp_redirects; - u64 xdp_drops; - u64 kicks; -}; - #define VIRTNET_SQ_STAT(m) offsetof(struct virtnet_sq_stats, m) #define VIRTNET_RQ_STAT(m) offsetof(struct virtnet_rq_stats, m) @@ -126,57 +84,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { #define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc) #define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc) -/* Internal representation of a send virtqueue */ -struct send_queue { - /* Virtqueue associated with this send _queue */ - struct virtqueue *vq; - - /* TX: fragments + linear part + virtio header */ - struct scatterlist sg[MAX_SKB_FRAGS + 2]; - - /* Name of the send queue: output.$index */ - char name[16]; - - struct virtnet_sq_stats stats; - - struct napi_struct napi; - - /* Record whether sq is in reset state. */ - bool reset; -}; - -/* Internal representation of a receive virtqueue */ -struct receive_queue { - /* Virtqueue associated with this receive_queue */ - struct virtqueue *vq; - - struct napi_struct napi; - - struct bpf_prog __rcu *xdp_prog; - - struct virtnet_rq_stats stats; - - /* Chain pages by the private ptr. */ - struct page *pages; - - /* Average packet length for mergeable receive buffers. */ - struct ewma_pkt_len mrg_avg_pkt_len; - - /* Page frag for packet buffer allocation. 
*/ - struct page_frag alloc_frag; - - /* RX: fragments + linear part + virtio header */ - struct scatterlist sg[MAX_SKB_FRAGS + 2]; - - /* Min single buffer size for mergeable buffers case. */ - unsigned int min_buf_len; - - /* Name of this receive queue: input.$index */ - char name[16]; - - struct xdp_rxq_info xdp_rxq; -}; - /* This structure can contain rss message with maximum settings for indirection table and keysize * Note, that default structure that describes RSS configuration virtio_net_rss_config * contains same info but can't handle table values. @@ -207,90 +114,6 @@ struct control_buf { struct virtio_net_ctrl_rss rss; }; -struct virtnet_info { - struct virtio_device *vdev; - struct virtqueue *cvq; - struct net_device *dev; - struct send_queue *sq; - struct receive_queue *rq; - unsigned int status; - - /* Max # of queue pairs supported by the device */ - u16 max_queue_pairs; - - /* # of queue pairs currently used by the driver */ - u16 curr_queue_pairs; - - /* # of XDP queue pairs currently used by the driver */ - u16 xdp_queue_pairs; - - /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */ - bool xdp_enabled; - - /* I like... big packets and I cannot lie! */ - bool big_packets; - - /* number of sg entries allocated for big packets */ - unsigned int big_packets_num_skbfrags; - - /* Host will merge rx buffers for big packets (shake it! shake it!) */ - bool mergeable_rx_bufs; - - /* Host supports rss and/or hash report */ - bool has_rss; - bool has_rss_hash_report; - u8 rss_key_size; - u16 rss_indir_table_size; - u32 rss_hash_types_supported; - u32 rss_hash_types_saved; - - /* Has control virtqueue */ - bool has_cvq; - - /* Host can handle any s/g split between our header and packet data */ - bool any_header_sg; - - /* Packet virtio header size */ - u8 hdr_len; - - /* Work struct for delayed refilling if we run low on memory. */ - struct delayed_work refill; - - /* Is delayed refill enabled? 
*/ - bool refill_enabled; - - /* The lock to synchronize the access to refill_enabled */ - spinlock_t refill_lock; - - /* Work struct for config space updates */ - struct work_struct config_work; - - /* Does the affinity hint is set for virtqueues? */ - bool affinity_hint_set; - - /* CPU hotplug instances for online & dead */ - struct hlist_node node; - struct hlist_node node_dead; - - struct control_buf *ctrl; - - /* Ethtool settings */ - u8 duplex; - u32 speed; - - /* Interrupt coalescing settings */ - u32 tx_usecs; - u32 rx_usecs; - u32 tx_max_packets; - u32 rx_max_packets; - - unsigned long guest_offloads; - unsigned long guest_offloads_capable; - - /* failover when STANDBY feature enabled */ - struct failover *failover; -}; - struct padded_vnet_hdr { struct virtio_net_hdr_v1_hash hdr; /* diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h new file mode 100644 index 000000000000..778a0e6af869 --- /dev/null +++ b/drivers/net/virtio/virtnet.h @@ -0,0 +1,184 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __VIRTNET_H__ +#define __VIRTNET_H__ + +#include <linux/ethtool.h> +#include <linux/average.h> + +#define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) + +/* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */ +#define VIRTIO_XDP_HEADROOM 256 + +/* Separating two types of XDP xmit */ +#define VIRTIO_XDP_TX BIT(0) +#define VIRTIO_XDP_REDIR BIT(1) + +#define VIRTIO_XDP_FLAG BIT(0) + +/* RX packet size EWMA. The average packet size is used to determine the packet + * buffer size when refilling RX rings. As the entire RX ring may be refilled + * at once, the weight is chosen so that the EWMA will be insensitive to short- + * term, transient changes in packet size. 
+ */ +DECLARE_EWMA(pkt_len, 0, 64) + +#define VIRTNET_DRIVER_VERSION "1.0.0" + +struct virtnet_sq_stats { + struct u64_stats_sync syncp; + u64 packets; + u64 bytes; + u64 xdp_tx; + u64 xdp_tx_drops; + u64 kicks; + u64 tx_timeouts; +}; + +struct virtnet_rq_stats { + struct u64_stats_sync syncp; + u64 packets; + u64 bytes; + u64 drops; + u64 xdp_packets; + u64 xdp_tx; + u64 xdp_redirects; + u64 xdp_drops; + u64 kicks; +}; + +struct send_queue { + /* Virtqueue associated with this send _queue */ + struct virtqueue *vq; + + /* TX: fragments + linear part + virtio header */ + struct scatterlist sg[MAX_SKB_FRAGS + 2]; + + /* Name of the send queue: output.$index */ + char name[16]; + + struct virtnet_sq_stats stats; + + struct napi_struct napi; + + /* Record whether sq is in reset state. */ + bool reset; +}; + +struct receive_queue { + /* Virtqueue associated with this receive_queue */ + struct virtqueue *vq; + + struct napi_struct napi; + + struct bpf_prog __rcu *xdp_prog; + + struct virtnet_rq_stats stats; + + /* Chain pages by the private ptr. */ + struct page *pages; + + /* Average packet length for mergeable receive buffers. */ + struct ewma_pkt_len mrg_avg_pkt_len; + + /* Page frag for packet buffer allocation. */ + struct page_frag alloc_frag; + + /* RX: fragments + linear part + virtio header */ + struct scatterlist sg[MAX_SKB_FRAGS + 2]; + + /* Min single buffer size for mergeable buffers case. 
*/ + unsigned int min_buf_len; + + /* Name of this receive queue: input.$index */ + char name[16]; + + struct xdp_rxq_info xdp_rxq; +}; + +struct virtnet_info { + struct virtio_device *vdev; + struct virtqueue *cvq; + struct net_device *dev; + struct send_queue *sq; + struct receive_queue *rq; + unsigned int status; + + /* Max # of queue pairs supported by the device */ + u16 max_queue_pairs; + + /* # of queue pairs currently used by the driver */ + u16 curr_queue_pairs; + + /* # of XDP queue pairs currently used by the driver */ + u16 xdp_queue_pairs; + + /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */ + bool xdp_enabled; + + /* I like... big packets and I cannot lie! */ + bool big_packets; + + /* number of sg entries allocated for big packets */ + unsigned int big_packets_num_skbfrags; + + /* Host will merge rx buffers for big packets (shake it! shake it!) */ + bool mergeable_rx_bufs; + + /* Host supports rss and/or hash report */ + bool has_rss; + bool has_rss_hash_report; + u8 rss_key_size; + u16 rss_indir_table_size; + u32 rss_hash_types_supported; + u32 rss_hash_types_saved; + + /* Has control virtqueue */ + bool has_cvq; + + /* Host can handle any s/g split between our header and packet data */ + bool any_header_sg; + + /* Packet virtio header size */ + u8 hdr_len; + + /* Work struct for delayed refilling if we run low on memory. */ + struct delayed_work refill; + + /* Is delayed refill enabled? */ + bool refill_enabled; + + /* The lock to synchronize the access to refill_enabled */ + spinlock_t refill_lock; + + /* Work struct for config space updates */ + struct work_struct config_work; + + /* Does the affinity hint is set for virtqueues? 
*/ + bool affinity_hint_set; + + /* CPU hotplug instances for online & dead */ + struct hlist_node node; + struct hlist_node node_dead; + + struct control_buf *ctrl; + + /* Ethtool settings */ + u8 duplex; + u32 speed; + + /* Interrupt coalescing settings */ + u32 tx_usecs; + u32 rx_usecs; + u32 tx_max_packets; + u32 rx_max_packets; + + unsigned long guest_offloads; + unsigned long guest_offloads_capable; + + /* failover when STANDBY feature enabled */ + struct failover *failover; +}; + +#endif -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 03/16] virtio_net: add prefix to the struct inside header file
We moved some structures to the header file, but these structures do not prefixed with virtnet. This patch adds virtnet for these. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 78 ++++++++++++++++++------------------ drivers/net/virtio/virtnet.h | 12 +++--- 2 files changed, 45 insertions(+), 45 deletions(-) diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 5ca354e29483..92ef95c163b6 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -174,7 +174,7 @@ static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb) * private is used to chain pages for big packets, put the whole * most recent used list in the beginning for reuse */ -static void give_pages(struct receive_queue *rq, struct page *page) +static void give_pages(struct virtnet_rq *rq, struct page *page) { struct page *end; @@ -184,7 +184,7 @@ static void give_pages(struct receive_queue *rq, struct page *page) rq->pages = page; } -static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask) +static struct page *get_a_page(struct virtnet_rq *rq, gfp_t gfp_mask) { struct page *p = rq->pages; @@ -268,7 +268,7 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx) /* Called from bottom half context */ static struct sk_buff *page_to_skb(struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct page *page, unsigned int offset, unsigned int len, unsigned int truesize, unsigned int headroom) @@ -370,7 +370,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi, return skb; } -static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi) +static void free_old_xmit_skbs(struct virtnet_sq *sq, bool in_napi) { unsigned int len; unsigned int packets = 0; @@ -418,7 +418,7 @@ static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) static void check_sq_full_and_disable(struct virtnet_info *vi, struct net_device *dev, - struct 
send_queue *sq) + struct virtnet_sq *sq) { bool use_napi = sq->napi.weight; int qnum; @@ -452,7 +452,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, } static int __virtnet_xdp_xmit_one(struct virtnet_info *vi, - struct send_queue *sq, + struct virtnet_sq *sq, struct xdp_frame *xdpf) { struct virtio_net_hdr_mrg_rxbuf *hdr; @@ -541,9 +541,9 @@ static int virtnet_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, u32 flags) { struct virtnet_info *vi = netdev_priv(dev); - struct receive_queue *rq = vi->rq; + struct virtnet_rq *rq = vi->rq; struct bpf_prog *xdp_prog; - struct send_queue *sq; + struct virtnet_sq *sq; unsigned int len; int packets = 0; int bytes = 0; @@ -631,7 +631,7 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi) * across multiple buffers (num_buf > 1), and we make sure buffers * have enough headroom. */ -static struct page *xdp_linearize_page(struct receive_queue *rq, +static struct page *xdp_linearize_page(struct virtnet_rq *rq, int *num_buf, struct page *p, int offset, @@ -683,7 +683,7 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, static struct sk_buff *receive_small(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, void *ctx, unsigned int len, unsigned int *xdp_xmit, @@ -827,7 +827,7 @@ static struct sk_buff *receive_small(struct net_device *dev, static struct sk_buff *receive_big(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, unsigned int len, struct virtnet_rq_stats *stats) @@ -900,7 +900,7 @@ static struct sk_buff *build_skb_from_xdp_buff(struct net_device *dev, /* TODO: build xdp in big mode */ static int virtnet_build_xdp_buff_mrg(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, struct xdp_buff *xdp, void *buf, unsigned int len, @@ -987,7 +987,7 @@ static int 
virtnet_build_xdp_buff_mrg(struct net_device *dev, static struct sk_buff *receive_mergeable(struct net_device *dev, struct virtnet_info *vi, - struct receive_queue *rq, + struct virtnet_rq *rq, void *buf, void *ctx, unsigned int len, @@ -1278,7 +1278,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type); } -static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq, +static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq, void *buf, unsigned int len, void **ctx, unsigned int *xdp_xmit, struct virtnet_rq_stats *stats) @@ -1338,7 +1338,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq, * not need to use mergeable_len_to_ctx here - it is enough * to store the headroom as the context ignoring the truesize. */ -static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq, +static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { struct page_frag *alloc_frag = &rq->alloc_frag; @@ -1364,7 +1364,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq, return err; } -static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq, +static int add_recvbuf_big(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { struct page *first, *list = NULL; @@ -1413,7 +1413,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq, return err; } -static unsigned int get_mergeable_buf_len(struct receive_queue *rq, +static unsigned int get_mergeable_buf_len(struct virtnet_rq *rq, struct ewma_pkt_len *avg_pkt_len, unsigned int room) { @@ -1431,7 +1431,7 @@ static unsigned int get_mergeable_buf_len(struct receive_queue *rq, } static int add_recvbuf_mergeable(struct virtnet_info *vi, - struct receive_queue *rq, gfp_t gfp) + struct virtnet_rq *rq, gfp_t gfp) { struct page_frag *alloc_frag = &rq->alloc_frag; 
unsigned int headroom = virtnet_get_headroom(vi); @@ -1483,7 +1483,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, * before we're receiving packets, or from refill_work which is * careful to disable receiving (using napi_disable). */ -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, +static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, gfp_t gfp) { int err; @@ -1515,7 +1515,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, static void skb_recv_done(struct virtqueue *rvq) { struct virtnet_info *vi = rvq->vdev->priv; - struct receive_queue *rq = &vi->rq[vq2rxq(rvq)]; + struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)]; virtqueue_napi_schedule(&rq->napi, rvq); } @@ -1565,7 +1565,7 @@ static void refill_work(struct work_struct *work) int i; for (i = 0; i < vi->curr_queue_pairs; i++) { - struct receive_queue *rq = &vi->rq[i]; + struct virtnet_rq *rq = &vi->rq[i]; napi_disable(&rq->napi); still_empty = !try_fill_recv(vi, rq, GFP_KERNEL); @@ -1579,7 +1579,7 @@ static void refill_work(struct work_struct *work) } } -static int virtnet_receive(struct receive_queue *rq, int budget, +static int virtnet_receive(struct virtnet_rq *rq, int budget, unsigned int *xdp_xmit) { struct virtnet_info *vi = rq->vq->vdev->priv; @@ -1626,11 +1626,11 @@ static int virtnet_receive(struct receive_queue *rq, int budget, return stats.packets; } -static void virtnet_poll_cleantx(struct receive_queue *rq) +static void virtnet_poll_cleantx(struct virtnet_rq *rq) { struct virtnet_info *vi = rq->vq->vdev->priv; unsigned int index = vq2rxq(rq->vq); - struct send_queue *sq = &vi->sq[index]; + struct virtnet_sq *sq = &vi->sq[index]; struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index); if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index)) @@ -1656,10 +1656,10 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) static int virtnet_poll(struct napi_struct *napi, int budget) { - struct 
receive_queue *rq - container_of(napi, struct receive_queue, napi); + struct virtnet_rq *rq + container_of(napi, struct virtnet_rq, napi); struct virtnet_info *vi = rq->vq->vdev->priv; - struct send_queue *sq; + struct virtnet_sq *sq; unsigned int received; unsigned int xdp_xmit = 0; @@ -1720,7 +1720,7 @@ static int virtnet_open(struct net_device *dev) static int virtnet_poll_tx(struct napi_struct *napi, int budget) { - struct send_queue *sq = container_of(napi, struct send_queue, napi); + struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi); struct virtnet_info *vi = sq->vq->vdev->priv; unsigned int index = vq2txq(sq->vq); struct netdev_queue *txq; @@ -1764,7 +1764,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) return 0; } -static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) +static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb) { struct virtio_net_hdr_mrg_rxbuf *hdr; const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest; @@ -1815,7 +1815,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) { struct virtnet_info *vi = netdev_priv(dev); int qnum = skb_get_queue_mapping(skb); - struct send_queue *sq = &vi->sq[qnum]; + struct virtnet_sq *sq = &vi->sq[qnum]; int err; struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum); bool kick = !netdev_xmit_more(); @@ -1869,7 +1869,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) } static int virtnet_rx_resize(struct virtnet_info *vi, - struct receive_queue *rq, u32 ring_num) + struct virtnet_rq *rq, u32 ring_num) { bool running = netif_running(vi->dev); int err, qindex; @@ -1892,7 +1892,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi, } static int virtnet_tx_resize(struct virtnet_info *vi, - struct send_queue *sq, u32 ring_num) + struct virtnet_sq *sq, u32 ring_num) { bool running = netif_running(vi->dev); struct netdev_queue *txq; @@ -2038,8 +2038,8 @@ static void virtnet_stats(struct 
net_device *dev, for (i = 0; i < vi->max_queue_pairs; i++) { u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops; - struct receive_queue *rq = &vi->rq[i]; - struct send_queue *sq = &vi->sq[i]; + struct virtnet_rq *rq = &vi->rq[i]; + struct virtnet_sq *sq = &vi->sq[i]; do { start = u64_stats_fetch_begin(&sq->stats.syncp); @@ -2355,8 +2355,8 @@ static int virtnet_set_ringparam(struct net_device *dev, { struct virtnet_info *vi = netdev_priv(dev); u32 rx_pending, tx_pending; - struct receive_queue *rq; - struct send_queue *sq; + struct virtnet_rq *rq; + struct virtnet_sq *sq; int i, err; if (ring->rx_mini_pending || ring->rx_jumbo_pending) @@ -2660,7 +2660,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev, size_t offset; for (i = 0; i < vi->curr_queue_pairs; i++) { - struct receive_queue *rq = &vi->rq[i]; + struct virtnet_rq *rq = &vi->rq[i]; stats_base = (u8 *)&rq->stats; do { @@ -2674,7 +2674,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev, } for (i = 0; i < vi->curr_queue_pairs; i++) { - struct send_queue *sq = &vi->sq[i]; + struct virtnet_sq *sq = &vi->sq[i]; stats_base = (u8 *)&sq->stats; do { @@ -3229,7 +3229,7 @@ static int virtnet_set_features(struct net_device *dev, static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct virtnet_info *priv = netdev_priv(dev); - struct send_queue *sq = &priv->sq[txqueue]; + struct virtnet_sq *sq = &priv->sq[txqueue]; struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue); u64_stats_update_begin(&sq->stats.syncp); diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h index 778a0e6af869..669e0499f340 100644 --- a/drivers/net/virtio/virtnet.h +++ b/drivers/net/virtio/virtnet.h @@ -48,8 +48,8 @@ struct virtnet_rq_stats { u64 kicks; }; -struct send_queue { - /* Virtqueue associated with this send _queue */ +struct virtnet_sq { + /* Virtqueue associated with this virtnet_sq */ struct virtqueue *vq; /* TX: fragments + linear part + virtio 
header */ @@ -66,8 +66,8 @@ struct send_queue { bool reset; }; -struct receive_queue { - /* Virtqueue associated with this receive_queue */ +struct virtnet_rq { + /* Virtqueue associated with this virtnet_rq */ struct virtqueue *vq; struct napi_struct napi; @@ -101,8 +101,8 @@ struct virtnet_info { struct virtio_device *vdev; struct virtqueue *cvq; struct net_device *dev; - struct send_queue *sq; - struct receive_queue *rq; + struct virtnet_sq *sq; + struct virtnet_rq *rq; unsigned int status; /* Max # of queue pairs supported by the device */ -- 2.32.0.3.g01195cf9f
Add a file virtnet_common.c to save the common funcs. This patch moves the cpu-related funs into it. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/Makefile | 2 +- drivers/net/virtio/virtnet.c | 132 ++------------------------ drivers/net/virtio/virtnet_common.c | 138 ++++++++++++++++++++++++++++ drivers/net/virtio/virtnet_common.h | 14 +++ 4 files changed, 163 insertions(+), 123 deletions(-) create mode 100644 drivers/net/virtio/virtnet_common.c create mode 100644 drivers/net/virtio/virtnet_common.h diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index ccd45c0e5064..3bef2b51876c 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -5,4 +5,4 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o -virtio_net-y := virtnet.o +virtio_net-y := virtnet.o virtnet_common.o diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 92ef95c163b6..3fcf70782d97 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -14,7 +14,6 @@ #include <linux/scatterlist.h> #include <linux/if_vlan.h> #include <linux/slab.h> -#include <linux/cpu.h> #include <linux/filter.h> #include <linux/kernel.h> #include <net/route.h> @@ -22,6 +21,7 @@ #include <net/net_failover.h> #include "virtnet.h" +#include "virtnet_common.h" static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -2233,108 +2233,6 @@ static int virtnet_vlan_rx_kill_vid(struct net_device *dev, return 0; } -static void virtnet_clean_affinity(struct virtnet_info *vi) -{ - int i; - - if (vi->affinity_hint_set) { - for (i = 0; i < vi->max_queue_pairs; i++) { - virtqueue_set_affinity(vi->rq[i].vq, NULL); - virtqueue_set_affinity(vi->sq[i].vq, NULL); - } - - vi->affinity_hint_set = false; - } -} - -static void virtnet_set_affinity(struct virtnet_info *vi) -{ - cpumask_var_t mask; - int stragglers; - int group_size; - int i, j, cpu; - int num_cpu; - int stride; - - if (!zalloc_cpumask_var(&mask, 
GFP_KERNEL)) { - virtnet_clean_affinity(vi); - return; - } - - num_cpu = num_online_cpus(); - stride = max_t(int, num_cpu / vi->curr_queue_pairs, 1); - stragglers = num_cpu >= vi->curr_queue_pairs ? - num_cpu % vi->curr_queue_pairs : - 0; - cpu = cpumask_first(cpu_online_mask); - - for (i = 0; i < vi->curr_queue_pairs; i++) { - group_size = stride + (i < stragglers ? 1 : 0); - - for (j = 0; j < group_size; j++) { - cpumask_set_cpu(cpu, mask); - cpu = cpumask_next_wrap(cpu, cpu_online_mask, - nr_cpu_ids, false); - } - virtqueue_set_affinity(vi->rq[i].vq, mask); - virtqueue_set_affinity(vi->sq[i].vq, mask); - __netif_set_xps_queue(vi->dev, cpumask_bits(mask), i, XPS_CPUS); - cpumask_clear(mask); - } - - vi->affinity_hint_set = true; - free_cpumask_var(mask); -} - -static int virtnet_cpu_online(unsigned int cpu, struct hlist_node *node) -{ - struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, - node); - virtnet_set_affinity(vi); - return 0; -} - -static int virtnet_cpu_dead(unsigned int cpu, struct hlist_node *node) -{ - struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, - node_dead); - virtnet_set_affinity(vi); - return 0; -} - -static int virtnet_cpu_down_prep(unsigned int cpu, struct hlist_node *node) -{ - struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, - node); - - virtnet_clean_affinity(vi); - return 0; -} - -static enum cpuhp_state virtionet_online; - -static int virtnet_cpu_notif_add(struct virtnet_info *vi) -{ - int ret; - - ret = cpuhp_state_add_instance_nocalls(virtionet_online, &vi->node); - if (ret) - return ret; - ret = cpuhp_state_add_instance_nocalls(CPUHP_VIRT_NET_DEAD, - &vi->node_dead); - if (!ret) - return ret; - cpuhp_state_remove_instance_nocalls(virtionet_online, &vi->node); - return ret; -} - -static void virtnet_cpu_notif_remove(struct virtnet_info *vi) -{ - cpuhp_state_remove_instance_nocalls(virtionet_online, &vi->node); - cpuhp_state_remove_instance_nocalls(CPUHP_VIRT_NET_DEAD, - 
&vi->node_dead); -} - static void virtnet_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring, struct kernel_ethtool_ringparam *kernel_ring, @@ -4091,34 +3989,24 @@ static __init int virtio_net_driver_init(void) { int ret; - ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "virtio/net:online", - virtnet_cpu_online, - virtnet_cpu_down_prep); - if (ret < 0) - goto out; - virtionet_online = ret; - ret = cpuhp_setup_state_multi(CPUHP_VIRT_NET_DEAD, "virtio/net:dead", - NULL, virtnet_cpu_dead); + ret = virtnet_cpuhp_setup(); if (ret) - goto err_dead; + return ret; + ret = register_virtio_driver(&virtio_net_driver); - if (ret) - goto err_virtio; + if (ret) { + virtnet_cpuhp_remove(); + return ret; + } + return 0; -err_virtio: - cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD); -err_dead: - cpuhp_remove_multi_state(virtionet_online); -out: - return ret; } module_init(virtio_net_driver_init); static __exit void virtio_net_driver_exit(void) { unregister_virtio_driver(&virtio_net_driver); - cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD); - cpuhp_remove_multi_state(virtionet_online); + virtnet_cpuhp_remove(); } module_exit(virtio_net_driver_exit); diff --git a/drivers/net/virtio/virtnet_common.c b/drivers/net/virtio/virtnet_common.c new file mode 100644 index 000000000000..bf0bac0b8704 --- /dev/null +++ b/drivers/net/virtio/virtnet_common.c @@ -0,0 +1,138 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +// +#include <linux/cpu.h> +#include <linux/netdevice.h> +#include <linux/virtio.h> +#include <linux/virtio_net.h> + +#include "virtnet.h" +#include "virtnet_common.h" + +void virtnet_clean_affinity(struct virtnet_info *vi) +{ + int i; + + if (vi->affinity_hint_set) { + for (i = 0; i < vi->max_queue_pairs; i++) { + virtqueue_set_affinity(vi->rq[i].vq, NULL); + virtqueue_set_affinity(vi->sq[i].vq, NULL); + } + + vi->affinity_hint_set = false; + } +} + +void virtnet_set_affinity(struct virtnet_info *vi) +{ + cpumask_var_t mask; + int stragglers; + int group_size; 
+ int i, j, cpu; + int num_cpu; + int stride; + + if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) { + virtnet_clean_affinity(vi); + return; + } + + num_cpu = num_online_cpus(); + stride = max_t(int, num_cpu / vi->curr_queue_pairs, 1); + stragglers = num_cpu >= vi->curr_queue_pairs ? + num_cpu % vi->curr_queue_pairs : + 0; + cpu = cpumask_first(cpu_online_mask); + + for (i = 0; i < vi->curr_queue_pairs; i++) { + group_size = stride + (i < stragglers ? 1 : 0); + + for (j = 0; j < group_size; j++) { + cpumask_set_cpu(cpu, mask); + cpu = cpumask_next_wrap(cpu, cpu_online_mask, + nr_cpu_ids, false); + } + virtqueue_set_affinity(vi->rq[i].vq, mask); + virtqueue_set_affinity(vi->sq[i].vq, mask); + __netif_set_xps_queue(vi->dev, cpumask_bits(mask), i, XPS_CPUS); + cpumask_clear(mask); + } + + vi->affinity_hint_set = true; + free_cpumask_var(mask); +} + +static int virtnet_cpu_online(unsigned int cpu, struct hlist_node *node) +{ + struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, + node); + virtnet_set_affinity(vi); + return 0; +} + +static int virtnet_cpu_dead(unsigned int cpu, struct hlist_node *node) +{ + struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, + node_dead); + virtnet_set_affinity(vi); + return 0; +} + +static int virtnet_cpu_down_prep(unsigned int cpu, struct hlist_node *node) +{ + struct virtnet_info *vi = hlist_entry_safe(node, struct virtnet_info, + node); + + virtnet_clean_affinity(vi); + return 0; +} + +static enum cpuhp_state virtionet_online; + +int virtnet_cpu_notif_add(struct virtnet_info *vi) +{ + int ret; + + ret = cpuhp_state_add_instance_nocalls(virtionet_online, &vi->node); + if (ret) + return ret; + ret = cpuhp_state_add_instance_nocalls(CPUHP_VIRT_NET_DEAD, + &vi->node_dead); + if (!ret) + return ret; + cpuhp_state_remove_instance_nocalls(virtionet_online, &vi->node); + return ret; +} + +void virtnet_cpu_notif_remove(struct virtnet_info *vi) +{ + cpuhp_state_remove_instance_nocalls(virtionet_online, 
&vi->node); + cpuhp_state_remove_instance_nocalls(CPUHP_VIRT_NET_DEAD, + &vi->node_dead); +} + +void virtnet_cpuhp_remove(void) +{ + cpuhp_remove_multi_state(CPUHP_VIRT_NET_DEAD); + cpuhp_remove_multi_state(virtionet_online); +} + +int virtnet_cpuhp_setup(void) +{ + int ret; + + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "virtio/net:online", + virtnet_cpu_online, + virtnet_cpu_down_prep); + if (ret < 0) + return ret; + + virtionet_online = ret; + ret = cpuhp_setup_state_multi(CPUHP_VIRT_NET_DEAD, "virtio/net:dead", + NULL, virtnet_cpu_dead); + if (ret) { + cpuhp_remove_multi_state(virtionet_online); + return ret; + } + + return 0; +} diff --git a/drivers/net/virtio/virtnet_common.h b/drivers/net/virtio/virtnet_common.h new file mode 100644 index 000000000000..0ee955950e5a --- /dev/null +++ b/drivers/net/virtio/virtnet_common.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef __VIRTNET_COMMON_H__ +#define __VIRTNET_COMMON_H__ + +void virtnet_clean_affinity(struct virtnet_info *vi); +void virtnet_set_affinity(struct virtnet_info *vi); +int virtnet_cpu_notif_add(struct virtnet_info *vi); +void virtnet_cpu_notif_remove(struct virtnet_info *vi); + +void virtnet_cpuhp_remove(void); +int virtnet_cpuhp_setup(void); + +#endif -- 2.32.0.3.g01195cf9f
[PATCH 05/16] virtio_net: separate virtnet_ctrl_set_queues() (Xuan Zhuo, 2023-Mar-28 09:28 UTC)
Separating the code setting queues by cq to a function. This is to facilitate separating cq-related functions into a separate file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 3fcf70782d97..0196492f289b 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -2078,19 +2078,25 @@ static void virtnet_ack_link_announce(struct virtnet_info *vi) rtnl_unlock(); } -static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) +static int virtnet_ctrl_set_queues(struct virtnet_info *vi, u16 queue_pairs) { struct scatterlist sg; + + vi->ctrl->mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); + sg_init_one(&sg, &vi->ctrl->mq, sizeof(vi->ctrl->mq)); + + return virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, + VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg); +} + +static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) +{ struct net_device *dev = vi->dev; if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ)) return 0; - vi->ctrl->mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); - sg_init_one(&sg, &vi->ctrl->mq, sizeof(vi->ctrl->mq)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, - VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg)) { + if (!virtnet_ctrl_set_queues(vi, queue_pairs)) { dev_warn(&dev->dev, "Fail to set num of queue pairs to %d\n", queue_pairs); return -EINVAL; -- 2.32.0.3.g01195cf9f
[PATCH 06/16] virtio_net: separate virtnet_ctrl_set_mac_address() (Xuan Zhuo, 2023-Mar-28 09:28 UTC)
Separating the code setting MAC by cq to a function. This is to facilitate separating cq-related functions into a separate file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 0196492f289b..6ad217af44d9 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -1982,13 +1982,29 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd, return vi->ctrl->status == VIRTIO_NET_OK; } +static int virtnet_ctrl_set_mac_address(struct virtnet_info *vi, const void *addr, int len) +{ + struct virtio_device *vdev = vi->vdev; + struct scatterlist sg; + + sg_init_one(&sg, addr, len); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, + VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { + dev_warn(&vdev->dev, + "Failed to set mac address by vq command.\n"); + return -EINVAL; + } + + return 0; +} + static int virtnet_set_mac_address(struct net_device *dev, void *p) { struct virtnet_info *vi = netdev_priv(dev); struct virtio_device *vdev = vi->vdev; int ret; struct sockaddr *addr; - struct scatterlist sg; if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STANDBY)) return -EOPNOTSUPP; @@ -2002,11 +2018,7 @@ static int virtnet_set_mac_address(struct net_device *dev, void *p) goto out; if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) { - sg_init_one(&sg, addr->sa_data, dev->addr_len); - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, - VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { - dev_warn(&vdev->dev, - "Failed to set mac address by vq command.\n"); + if (virtnet_ctrl_set_mac_address(vi, addr->sa_data, dev->addr_len)) { ret = -EINVAL; goto out; } @@ -3822,12 +3834,7 @@ static int virtnet_probe(struct virtio_device *vdev) */ if (!virtio_has_feature(vdev, VIRTIO_NET_F_MAC) && virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) { - struct 
scatterlist sg; - - sg_init_one(&sg, dev->dev_addr, dev->addr_len); - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, - VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { - pr_debug("virtio_net: setting MAC address failed\n"); + if (virtnet_ctrl_set_mac_address(vi, dev->dev_addr, dev->addr_len)) { rtnl_unlock(); err = -EINVAL; goto free_unregister_netdev; -- 2.32.0.3.g01195cf9f
[PATCH 07/16] virtio_net: remove lock from virtnet_ack_link_announce() (Xuan Zhuo, 2023-Mar-28 09:28 UTC)
Removing rtnl_lock() from virtnet_ack_link_announce(). This is to facilitate separating cq-related functions into a separate file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 6ad217af44d9..4a3b5efb674e 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -2083,11 +2083,9 @@ static void virtnet_stats(struct net_device *dev, static void virtnet_ack_link_announce(struct virtnet_info *vi) { - rtnl_lock(); if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_ANNOUNCE, VIRTIO_NET_CTRL_ANNOUNCE_ACK, NULL)) dev_warn(&vi->dev->dev, "Failed to ack link announce.\n"); - rtnl_unlock(); } static int virtnet_ctrl_set_queues(struct virtnet_info *vi, u16 queue_pairs) @@ -3187,7 +3185,10 @@ static void virtnet_config_changed_work(struct work_struct *work) if (v & VIRTIO_NET_S_ANNOUNCE) { netdev_notify_peers(vi->dev); + + rtnl_lock(); virtnet_ack_link_announce(vi); + rtnl_unlock(); } /* Ignore unknown (future) status bits */ -- 2.32.0.3.g01195cf9f
Separating the APIs of cq into a file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/Makefile | 2 +- drivers/net/virtio/virtnet.c | 299 +----------------------------- drivers/net/virtio/virtnet_ctrl.c | 272 +++++++++++++++++++++++++++ drivers/net/virtio/virtnet_ctrl.h | 45 +++++ 4 files changed, 319 insertions(+), 299 deletions(-) create mode 100644 drivers/net/virtio/virtnet_ctrl.c create mode 100644 drivers/net/virtio/virtnet_ctrl.h diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index 3bef2b51876c..a2d80f95c921 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -5,4 +5,4 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o -virtio_net-y := virtnet.o virtnet_common.o +virtio_net-y := virtnet.o virtnet_common.o virtnet_ctrl.o diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 4a3b5efb674e..84b90333dc77 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -22,6 +22,7 @@ #include "virtnet.h" #include "virtnet_common.h" +#include "virtnet_ctrl.h" static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -84,36 +85,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { #define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc) #define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc) -/* This structure can contain rss message with maximum settings for indirection table and keysize - * Note, that default structure that describes RSS configuration virtio_net_rss_config - * contains same info but can't handle table values. - * In any case, structure would be passed to virtio hw through sg_buf split by parts - * because table sizes may be differ according to the device configuration. 
- */ -#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40 -#define VIRTIO_NET_RSS_MAX_TABLE_LEN 128 -struct virtio_net_ctrl_rss { - u32 hash_types; - u16 indirection_table_mask; - u16 unclassified_queue; - u16 indirection_table[VIRTIO_NET_RSS_MAX_TABLE_LEN]; - u16 max_tx_vq; - u8 hash_key_length; - u8 key[VIRTIO_NET_RSS_MAX_KEY_SIZE]; -}; - -/* Control VQ buffers: protected by the rtnl lock */ -struct control_buf { - struct virtio_net_ctrl_hdr hdr; - virtio_net_ctrl_ack status; - struct virtio_net_ctrl_mq mq; - u8 promisc; - u8 allmulti; - __virtio16 vid; - __virtio64 offloads; - struct virtio_net_ctrl_rss rss; -}; - struct padded_vnet_hdr { struct virtio_net_hdr_v1_hash hdr; /* @@ -1932,73 +1903,6 @@ static int virtnet_tx_resize(struct virtnet_info *vi, return err; } -/* - * Send command via the control virtqueue and check status. Commands - * supported by the hypervisor, as indicated by feature bits, should - * never fail unless improperly formatted. - */ -static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd, - struct scatterlist *out) -{ - struct scatterlist *sgs[4], hdr, stat; - unsigned out_num = 0, tmp; - int ret; - - /* Caller should know better */ - BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ)); - - vi->ctrl->status = ~0; - vi->ctrl->hdr.class = class; - vi->ctrl->hdr.cmd = cmd; - /* Add header */ - sg_init_one(&hdr, &vi->ctrl->hdr, sizeof(vi->ctrl->hdr)); - sgs[out_num++] = &hdr; - - if (out) - sgs[out_num++] = out; - - /* Add return status. 
*/ - sg_init_one(&stat, &vi->ctrl->status, sizeof(vi->ctrl->status)); - sgs[out_num] = &stat; - - BUG_ON(out_num + 1 > ARRAY_SIZE(sgs)); - ret = virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC); - if (ret < 0) { - dev_warn(&vi->vdev->dev, - "Failed to add sgs for command vq: %d\n.", ret); - return false; - } - - if (unlikely(!virtqueue_kick(vi->cvq))) - return vi->ctrl->status == VIRTIO_NET_OK; - - /* Spin for a response, the kick causes an ioport write, trapping - * into the hypervisor, so the request should be handled immediately. - */ - while (!virtqueue_get_buf(vi->cvq, &tmp) && - !virtqueue_is_broken(vi->cvq)) - cpu_relax(); - - return vi->ctrl->status == VIRTIO_NET_OK; -} - -static int virtnet_ctrl_set_mac_address(struct virtnet_info *vi, const void *addr, int len) -{ - struct virtio_device *vdev = vi->vdev; - struct scatterlist sg; - - sg_init_one(&sg, addr, len); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, - VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { - dev_warn(&vdev->dev, - "Failed to set mac address by vq command.\n"); - return -EINVAL; - } - - return 0; -} - static int virtnet_set_mac_address(struct net_device *dev, void *p) { struct virtnet_info *vi = netdev_priv(dev); @@ -2081,24 +1985,6 @@ static void virtnet_stats(struct net_device *dev, tot->rx_frame_errors = dev->stats.rx_frame_errors; } -static void virtnet_ack_link_announce(struct virtnet_info *vi) -{ - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_ANNOUNCE, - VIRTIO_NET_CTRL_ANNOUNCE_ACK, NULL)) - dev_warn(&vi->dev->dev, "Failed to ack link announce.\n"); -} - -static int virtnet_ctrl_set_queues(struct virtnet_info *vi, u16 queue_pairs) -{ - struct scatterlist sg; - - vi->ctrl->mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); - sg_init_one(&sg, &vi->ctrl->mq, sizeof(vi->ctrl->mq)); - - return virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, - VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg); -} - static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) { struct 
net_device *dev = vi->dev; @@ -2149,106 +2035,6 @@ static int virtnet_close(struct net_device *dev) return 0; } -static void virtnet_set_rx_mode(struct net_device *dev) -{ - struct virtnet_info *vi = netdev_priv(dev); - struct scatterlist sg[2]; - struct virtio_net_ctrl_mac *mac_data; - struct netdev_hw_addr *ha; - int uc_count; - int mc_count; - void *buf; - int i; - - /* We can't dynamically set ndo_set_rx_mode, so return gracefully */ - if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_RX)) - return; - - vi->ctrl->promisc = ((dev->flags & IFF_PROMISC) != 0); - vi->ctrl->allmulti = ((dev->flags & IFF_ALLMULTI) != 0); - - sg_init_one(sg, &vi->ctrl->promisc, sizeof(vi->ctrl->promisc)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, - VIRTIO_NET_CTRL_RX_PROMISC, sg)) - dev_warn(&dev->dev, "Failed to %sable promisc mode.\n", - vi->ctrl->promisc ? "en" : "dis"); - - sg_init_one(sg, &vi->ctrl->allmulti, sizeof(vi->ctrl->allmulti)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, - VIRTIO_NET_CTRL_RX_ALLMULTI, sg)) - dev_warn(&dev->dev, "Failed to %sable allmulti mode.\n", - vi->ctrl->allmulti ? 
"en" : "dis"); - - uc_count = netdev_uc_count(dev); - mc_count = netdev_mc_count(dev); - /* MAC filter - use one buffer for both lists */ - buf = kzalloc(((uc_count + mc_count) * ETH_ALEN) + - (2 * sizeof(mac_data->entries)), GFP_ATOMIC); - mac_data = buf; - if (!buf) - return; - - sg_init_table(sg, 2); - - /* Store the unicast list and count in the front of the buffer */ - mac_data->entries = cpu_to_virtio32(vi->vdev, uc_count); - i = 0; - netdev_for_each_uc_addr(ha, dev) - memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); - - sg_set_buf(&sg[0], mac_data, - sizeof(mac_data->entries) + (uc_count * ETH_ALEN)); - - /* multicast list and count fill the end */ - mac_data = (void *)&mac_data->macs[uc_count][0]; - - mac_data->entries = cpu_to_virtio32(vi->vdev, mc_count); - i = 0; - netdev_for_each_mc_addr(ha, dev) - memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); - - sg_set_buf(&sg[1], mac_data, - sizeof(mac_data->entries) + (mc_count * ETH_ALEN)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, - VIRTIO_NET_CTRL_MAC_TABLE_SET, sg)) - dev_warn(&dev->dev, "Failed to set MAC filter table.\n"); - - kfree(buf); -} - -static int virtnet_vlan_rx_add_vid(struct net_device *dev, - __be16 proto, u16 vid) -{ - struct virtnet_info *vi = netdev_priv(dev); - struct scatterlist sg; - - vi->ctrl->vid = cpu_to_virtio16(vi->vdev, vid); - sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, - VIRTIO_NET_CTRL_VLAN_ADD, &sg)) - dev_warn(&dev->dev, "Failed to add VLAN ID %d.\n", vid); - return 0; -} - -static int virtnet_vlan_rx_kill_vid(struct net_device *dev, - __be16 proto, u16 vid) -{ - struct virtnet_info *vi = netdev_priv(dev); - struct scatterlist sg; - - vi->ctrl->vid = cpu_to_virtio16(vi->vdev, vid); - sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, - VIRTIO_NET_CTRL_VLAN_DEL, &sg)) - dev_warn(&dev->dev, "Failed to kill VLAN ID %d.\n", vid); - 
return 0; -} - static void virtnet_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring, struct kernel_ethtool_ringparam *kernel_ring, @@ -2309,37 +2095,6 @@ static int virtnet_set_ringparam(struct net_device *dev, return 0; } -static bool virtnet_commit_rss_command(struct virtnet_info *vi) -{ - struct net_device *dev = vi->dev; - struct scatterlist sgs[4]; - unsigned int sg_buf_size; - - /* prepare sgs */ - sg_init_table(sgs, 4); - - sg_buf_size = offsetof(struct virtio_net_ctrl_rss, indirection_table); - sg_set_buf(&sgs[0], &vi->ctrl->rss, sg_buf_size); - - sg_buf_size = sizeof(uint16_t) * (vi->ctrl->rss.indirection_table_mask + 1); - sg_set_buf(&sgs[1], vi->ctrl->rss.indirection_table, sg_buf_size); - - sg_buf_size = offsetof(struct virtio_net_ctrl_rss, key) - - offsetof(struct virtio_net_ctrl_rss, max_tx_vq); - sg_set_buf(&sgs[2], &vi->ctrl->rss.max_tx_vq, sg_buf_size); - - sg_buf_size = vi->rss_key_size; - sg_set_buf(&sgs[3], vi->ctrl->rss.key, sg_buf_size); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, - vi->has_rss ? 
VIRTIO_NET_CTRL_MQ_RSS_CONFIG - : VIRTIO_NET_CTRL_MQ_HASH_CONFIG, sgs)) { - dev_warn(&dev->dev, "VIRTIONET issue with committing RSS sgs\n"); - return false; - } - return true; -} - static void virtnet_init_default_rss(struct virtnet_info *vi) { u32 indir_val = 0; @@ -2636,42 +2391,6 @@ static int virtnet_get_link_ksettings(struct net_device *dev, return 0; } -static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, - struct ethtool_coalesce *ec) -{ - struct scatterlist sgs_tx, sgs_rx; - struct virtio_net_ctrl_coal_tx coal_tx; - struct virtio_net_ctrl_coal_rx coal_rx; - - coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); - coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); - sg_init_one(&sgs_tx, &coal_tx, sizeof(coal_tx)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, - VIRTIO_NET_CTRL_NOTF_COAL_TX_SET, - &sgs_tx)) - return -EINVAL; - - /* Save parameters */ - vi->tx_usecs = ec->tx_coalesce_usecs; - vi->tx_max_packets = ec->tx_max_coalesced_frames; - - coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); - coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); - sg_init_one(&sgs_rx, &coal_rx, sizeof(coal_rx)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, - VIRTIO_NET_CTRL_NOTF_COAL_RX_SET, - &sgs_rx)) - return -EINVAL; - - /* Save parameters */ - vi->rx_usecs = ec->rx_coalesce_usecs; - vi->rx_max_packets = ec->rx_max_coalesced_frames; - - return 0; -} - static int virtnet_coal_params_supported(struct ethtool_coalesce *ec) { /* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL @@ -2922,22 +2641,6 @@ static int virtnet_restore_up(struct virtio_device *vdev) return err; } -static int virtnet_set_guest_offloads(struct virtnet_info *vi, u64 offloads) -{ - struct scatterlist sg; - vi->ctrl->offloads = cpu_to_virtio64(vi->vdev, offloads); - - sg_init_one(&sg, &vi->ctrl->offloads, sizeof(vi->ctrl->offloads)); - - if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_GUEST_OFFLOADS, - 
VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET, &sg)) { - dev_warn(&vi->dev->dev, "Fail to set guest offload.\n"); - return -EINVAL; - } - - return 0; -} - static int virtnet_clear_guest_offloads(struct virtnet_info *vi) { u64 offloads = 0; diff --git a/drivers/net/virtio/virtnet_ctrl.c b/drivers/net/virtio/virtnet_ctrl.c new file mode 100644 index 000000000000..4b5ffa9eedd4 --- /dev/null +++ b/drivers/net/virtio/virtnet_ctrl.c @@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +// +#include <linux/netdevice.h> +#include <linux/virtio_net.h> + +#include "virtnet.h" +#include "virtnet_ctrl.h" + +/* Send command via the control virtqueue and check status. Commands + * supported by the hypervisor, as indicated by feature bits, should + * never fail unless improperly formatted. + */ +static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd, + struct scatterlist *out) +{ + struct scatterlist *sgs[4], hdr, stat; + unsigned int out_num = 0, tmp; + int ret; + + /* Caller should know better */ + BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ)); + + vi->ctrl->status = ~0; + vi->ctrl->hdr.class = class; + vi->ctrl->hdr.cmd = cmd; + /* Add header */ + sg_init_one(&hdr, &vi->ctrl->hdr, sizeof(vi->ctrl->hdr)); + sgs[out_num++] = &hdr; + + if (out) + sgs[out_num++] = out; + + /* Add return status. */ + sg_init_one(&stat, &vi->ctrl->status, sizeof(vi->ctrl->status)); + sgs[out_num] = &stat; + + BUG_ON(out_num + 1 > ARRAY_SIZE(sgs)); + ret = virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC); + if (ret < 0) { + dev_warn(&vi->vdev->dev, + "Failed to add sgs for command vq: %d\n.", ret); + return false; + } + + if (unlikely(!virtqueue_kick(vi->cvq))) + return vi->ctrl->status == VIRTIO_NET_OK; + + /* Spin for a response, the kick causes an ioport write, trapping + * into the hypervisor, so the request should be handled immediately. 
+ */ + while (!virtqueue_get_buf(vi->cvq, &tmp) && + !virtqueue_is_broken(vi->cvq)) + cpu_relax(); + + return vi->ctrl->status == VIRTIO_NET_OK; +} + +int virtnet_ctrl_set_mac_address(struct virtnet_info *vi, const void *addr, int len) +{ + struct virtio_device *vdev = vi->vdev; + struct scatterlist sg; + + sg_init_one(&sg, addr, len); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, + VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { + dev_warn(&vdev->dev, + "Failed to set mac address by vq command.\n"); + return -EINVAL; + } + + return 0; +} + +void virtnet_ack_link_announce(struct virtnet_info *vi) +{ + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_ANNOUNCE, + VIRTIO_NET_CTRL_ANNOUNCE_ACK, NULL)) + dev_warn(&vi->dev->dev, "Failed to ack link announce.\n"); +} + +int virtnet_ctrl_set_queues(struct virtnet_info *vi, u16 queue_pairs) +{ + struct scatterlist sg; + + vi->ctrl->mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); + sg_init_one(&sg, &vi->ctrl->mq, sizeof(vi->ctrl->mq)); + + return virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, + VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg); +} + +void virtnet_set_rx_mode(struct net_device *dev) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct scatterlist sg[2]; + struct virtio_net_ctrl_mac *mac_data; + struct netdev_hw_addr *ha; + int uc_count; + int mc_count; + void *buf; + int i; + + /* We can't dynamically set ndo_set_rx_mode, so return gracefully */ + if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_RX)) + return; + + vi->ctrl->promisc = ((dev->flags & IFF_PROMISC) != 0); + vi->ctrl->allmulti = ((dev->flags & IFF_ALLMULTI) != 0); + + sg_init_one(sg, &vi->ctrl->promisc, sizeof(vi->ctrl->promisc)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, + VIRTIO_NET_CTRL_RX_PROMISC, sg)) + dev_warn(&dev->dev, "Failed to %sable promisc mode.\n", + vi->ctrl->promisc ? 
"en" : "dis"); + + sg_init_one(sg, &vi->ctrl->allmulti, sizeof(vi->ctrl->allmulti)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, + VIRTIO_NET_CTRL_RX_ALLMULTI, sg)) + dev_warn(&dev->dev, "Failed to %sable allmulti mode.\n", + vi->ctrl->allmulti ? "en" : "dis"); + + uc_count = netdev_uc_count(dev); + mc_count = netdev_mc_count(dev); + /* MAC filter - use one buffer for both lists */ + buf = kzalloc(((uc_count + mc_count) * ETH_ALEN) + + (2 * sizeof(mac_data->entries)), GFP_ATOMIC); + mac_data = buf; + if (!buf) + return; + + sg_init_table(sg, 2); + + /* Store the unicast list and count in the front of the buffer */ + mac_data->entries = cpu_to_virtio32(vi->vdev, uc_count); + i = 0; + netdev_for_each_uc_addr(ha, dev) + memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); + + sg_set_buf(&sg[0], mac_data, + sizeof(mac_data->entries) + (uc_count * ETH_ALEN)); + + /* multicast list and count fill the end */ + mac_data = (void *)&mac_data->macs[uc_count][0]; + + mac_data->entries = cpu_to_virtio32(vi->vdev, mc_count); + i = 0; + netdev_for_each_mc_addr(ha, dev) + memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN); + + sg_set_buf(&sg[1], mac_data, + sizeof(mac_data->entries) + (mc_count * ETH_ALEN)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MAC, + VIRTIO_NET_CTRL_MAC_TABLE_SET, sg)) + dev_warn(&dev->dev, "Failed to set MAC filter table.\n"); + + kfree(buf); +} + +int virtnet_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct scatterlist sg; + + vi->ctrl->vid = cpu_to_virtio16(vi->vdev, vid); + sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, + VIRTIO_NET_CTRL_VLAN_ADD, &sg)) + dev_warn(&dev->dev, "Failed to add VLAN ID %d.\n", vid); + return 0; +} + +int virtnet_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct scatterlist sg; + + vi->ctrl->vid = 
cpu_to_virtio16(vi->vdev, vid); + sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, + VIRTIO_NET_CTRL_VLAN_DEL, &sg)) + dev_warn(&dev->dev, "Failed to kill VLAN ID %d.\n", vid); + return 0; +} + +bool virtnet_commit_rss_command(struct virtnet_info *vi) +{ + struct net_device *dev = vi->dev; + struct scatterlist sgs[4]; + unsigned int sg_buf_size; + + /* prepare sgs */ + sg_init_table(sgs, 4); + + sg_buf_size = offsetof(struct virtio_net_ctrl_rss, indirection_table); + sg_set_buf(&sgs[0], &vi->ctrl->rss, sg_buf_size); + + sg_buf_size = sizeof(uint16_t) * (vi->ctrl->rss.indirection_table_mask + 1); + sg_set_buf(&sgs[1], vi->ctrl->rss.indirection_table, sg_buf_size); + + sg_buf_size = offsetof(struct virtio_net_ctrl_rss, key) + - offsetof(struct virtio_net_ctrl_rss, max_tx_vq); + sg_set_buf(&sgs[2], &vi->ctrl->rss.max_tx_vq, sg_buf_size); + + sg_buf_size = vi->rss_key_size; + sg_set_buf(&sgs[3], vi->ctrl->rss.key, sg_buf_size); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, + vi->has_rss ? 
VIRTIO_NET_CTRL_MQ_RSS_CONFIG + : VIRTIO_NET_CTRL_MQ_HASH_CONFIG, sgs)) { + dev_warn(&dev->dev, "VIRTIONET issue with committing RSS sgs\n"); + return false; + } + return true; +} + +int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, struct ethtool_coalesce *ec) +{ + struct scatterlist sgs_tx, sgs_rx; + struct virtio_net_ctrl_coal_tx coal_tx; + struct virtio_net_ctrl_coal_rx coal_rx; + + coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); + coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); + sg_init_one(&sgs_tx, &coal_tx, sizeof(coal_tx)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, + VIRTIO_NET_CTRL_NOTF_COAL_TX_SET, + &sgs_tx)) + return -EINVAL; + + /* Save parameters */ + vi->tx_usecs = ec->tx_coalesce_usecs; + vi->tx_max_packets = ec->tx_max_coalesced_frames; + + coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); + coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); + sg_init_one(&sgs_rx, &coal_rx, sizeof(coal_rx)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, + VIRTIO_NET_CTRL_NOTF_COAL_RX_SET, + &sgs_rx)) + return -EINVAL; + + /* Save parameters */ + vi->rx_usecs = ec->rx_coalesce_usecs; + vi->rx_max_packets = ec->rx_max_coalesced_frames; + + return 0; +} + +int virtnet_set_guest_offloads(struct virtnet_info *vi, u64 offloads) +{ + struct scatterlist sg; + vi->ctrl->offloads = cpu_to_virtio64(vi->vdev, offloads); + + sg_init_one(&sg, &vi->ctrl->offloads, sizeof(vi->ctrl->offloads)); + + if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_GUEST_OFFLOADS, + VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET, &sg)) { + dev_warn(&vi->dev->dev, "Fail to set guest offload.\n"); + return -EINVAL; + } + + return 0; +} + diff --git a/drivers/net/virtio/virtnet_ctrl.h b/drivers/net/virtio/virtnet_ctrl.h new file mode 100644 index 000000000000..f5cd3099264b --- /dev/null +++ b/drivers/net/virtio/virtnet_ctrl.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef __VIRTNET_CTRL_H__ +#define 
__VIRTNET_CTRL_H__ + +/* This structure can contain rss message with maximum settings for indirection table and keysize + * Note, that default structure that describes RSS configuration virtio_net_rss_config + * contains same info but can't handle table values. + * In any case, structure would be passed to virtio hw through sg_buf split by parts + * because table sizes may be differ according to the device configuration. + */ +#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40 +#define VIRTIO_NET_RSS_MAX_TABLE_LEN 128 +struct virtio_net_ctrl_rss { + u32 hash_types; + u16 indirection_table_mask; + u16 unclassified_queue; + u16 indirection_table[VIRTIO_NET_RSS_MAX_TABLE_LEN]; + u16 max_tx_vq; + u8 hash_key_length; + u8 key[VIRTIO_NET_RSS_MAX_KEY_SIZE]; +}; + +/* Control VQ buffers: protected by the rtnl lock */ +struct control_buf { + struct virtio_net_ctrl_hdr hdr; + virtio_net_ctrl_ack status; + struct virtio_net_ctrl_mq mq; + u8 promisc; + u8 allmulti; + __virtio16 vid; + __virtio64 offloads; + struct virtio_net_ctrl_rss rss; +}; + +int virtnet_ctrl_set_mac_address(struct virtnet_info *vi, const void *addr, int len); +void virtnet_ack_link_announce(struct virtnet_info *vi); +int virtnet_ctrl_set_queues(struct virtnet_info *vi, u16 queue_pairs); +void virtnet_set_rx_mode(struct net_device *dev); +int virtnet_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid); +int virtnet_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid); +bool virtnet_commit_rss_command(struct virtnet_info *vi); +int virtnet_send_notf_coal_cmds(struct virtnet_info *vi, struct ethtool_coalesce *ec); +int virtnet_set_guest_offloads(struct virtnet_info *vi, u64 offloads); +#endif -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 09/16] virtio_net: introduce virtnet_rq_update_stats()
Separate the code that updates the rq stats into virtnet_rq_update_stats().
This is a preparation for separating the ethtool code.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/virtnet.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c
index 84b90333dc77..36c747e43b3f 100644
--- a/drivers/net/virtio/virtnet.c
+++ b/drivers/net/virtio/virtnet.c
@@ -1550,6 +1550,21 @@ static void refill_work(struct work_struct *work)
 	}
 }
 
+static void virtnet_rq_update_stats(struct virtnet_rq *rq, struct virtnet_rq_stats *stats)
+{
+	int i;
+
+	u64_stats_update_begin(&rq->stats.syncp);
+	for (i = 0; i < VIRTNET_RQ_STATS_LEN; i++) {
+		size_t offset = virtnet_rq_stats_desc[i].offset;
+		u64 *item;
+
+		item = (u64 *)((u8 *)&rq->stats + offset);
+		*item += *(u64 *)((u8 *)stats + offset);
+	}
+	u64_stats_update_end(&rq->stats.syncp);
+}
+
 static int virtnet_receive(struct virtnet_rq *rq, int budget,
 			   unsigned int *xdp_xmit)
 {
@@ -1557,7 +1572,6 @@ static int virtnet_receive(struct virtnet_rq *rq, int budget,
 	struct virtnet_rq_stats stats = {};
 	unsigned int len;
 	void *buf;
-	int i;
 
 	if (!vi->big_packets || vi->mergeable_rx_bufs) {
 		void *ctx;
@@ -1584,15 +1598,7 @@ static int virtnet_receive(struct virtnet_rq *rq, int budget,
 		}
 	}
 
-	u64_stats_update_begin(&rq->stats.syncp);
-	for (i = 0; i < VIRTNET_RQ_STATS_LEN; i++) {
-		size_t offset = virtnet_rq_stats_desc[i].offset;
-		u64 *item;
-
-		item = (u64 *)((u8 *)&rq->stats + offset);
-		*item += *(u64 *)((u8 *)&stats + offset);
-	}
-	u64_stats_update_end(&rq->stats.syncp);
+	virtnet_rq_update_stats(rq, &stats);
 
 	return stats.packets;
 }
-- 
2.32.0.3.g01195cf9f
Put the functions of ethtool in an independent file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/Makefile | 3 +- drivers/net/virtio/virtnet.c | 573 +------------------------- drivers/net/virtio/virtnet.h | 3 + drivers/net/virtio/virtnet_ethtool.c | 578 +++++++++++++++++++++++++++ drivers/net/virtio/virtnet_ethtool.h | 8 + 5 files changed, 596 insertions(+), 569 deletions(-) create mode 100644 drivers/net/virtio/virtnet_ethtool.c create mode 100644 drivers/net/virtio/virtnet_ethtool.h diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index a2d80f95c921..9b35fb00d6c7 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -5,4 +5,5 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o -virtio_net-y := virtnet.o virtnet_common.o virtnet_ctrl.o +virtio_net-y := virtnet.o virtnet_common.o virtnet_ctrl.o \ + virtnet_ethtool.o diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 36c747e43b3f..1323c6733f56 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -23,6 +23,7 @@ #include "virtnet.h" #include "virtnet_common.h" #include "virtnet_ctrl.h" +#include "virtnet_ethtool.h" static int napi_weight = NAPI_POLL_WEIGHT; module_param(napi_weight, int, 0444); @@ -54,37 +55,6 @@ static const unsigned long guest_offloads[] = { (1ULL << VIRTIO_NET_F_GUEST_USO4) | \ (1ULL << VIRTIO_NET_F_GUEST_USO6)) -struct virtnet_stat_desc { - char desc[ETH_GSTRING_LEN]; - size_t offset; -}; - -#define VIRTNET_SQ_STAT(m) offsetof(struct virtnet_sq_stats, m) -#define VIRTNET_RQ_STAT(m) offsetof(struct virtnet_rq_stats, m) - -static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = { - { "packets", VIRTNET_SQ_STAT(packets) }, - { "bytes", VIRTNET_SQ_STAT(bytes) }, - { "xdp_tx", VIRTNET_SQ_STAT(xdp_tx) }, - { "xdp_tx_drops", VIRTNET_SQ_STAT(xdp_tx_drops) }, - { "kicks", VIRTNET_SQ_STAT(kicks) }, - { "tx_timeouts", VIRTNET_SQ_STAT(tx_timeouts) }, -}; - -static const 
struct virtnet_stat_desc virtnet_rq_stats_desc[] = { - { "packets", VIRTNET_RQ_STAT(packets) }, - { "bytes", VIRTNET_RQ_STAT(bytes) }, - { "drops", VIRTNET_RQ_STAT(drops) }, - { "xdp_packets", VIRTNET_RQ_STAT(xdp_packets) }, - { "xdp_tx", VIRTNET_RQ_STAT(xdp_tx) }, - { "xdp_redirects", VIRTNET_RQ_STAT(xdp_redirects) }, - { "xdp_drops", VIRTNET_RQ_STAT(xdp_drops) }, - { "kicks", VIRTNET_RQ_STAT(kicks) }, -}; - -#define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc) -#define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc) - struct padded_vnet_hdr { struct virtio_net_hdr_v1_hash hdr; /* @@ -1550,21 +1520,6 @@ static void refill_work(struct work_struct *work) } } -static void virtnet_rq_update_stats(struct virtnet_rq *rq, struct virtnet_rq_stats *stats) -{ - int i; - - u64_stats_update_begin(&rq->stats.syncp); - for (i = 0; i < VIRTNET_RQ_STATS_LEN; i++) { - size_t offset = virtnet_rq_stats_desc[i].offset; - u64 *item; - - item = (u64 *)((u8 *)&rq->stats + offset); - *item += *(u64 *)((u8 *)stats + offset); - } - u64_stats_update_end(&rq->stats.syncp); -} - static int virtnet_receive(struct virtnet_rq *rq, int budget, unsigned int *xdp_xmit) { @@ -1845,8 +1800,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev) return NETDEV_TX_OK; } -static int virtnet_rx_resize(struct virtnet_info *vi, - struct virtnet_rq *rq, u32 ring_num) +int virtnet_rx_resize(struct virtnet_info *vi, struct virtnet_rq *rq, u32 ring_num) { bool running = netif_running(vi->dev); int err, qindex; @@ -1868,8 +1822,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi, return err; } -static int virtnet_tx_resize(struct virtnet_info *vi, - struct virtnet_sq *sq, u32 ring_num) +int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, u32 ring_num) { bool running = netif_running(vi->dev); struct netdev_queue *txq; @@ -1991,7 +1944,7 @@ static void virtnet_stats(struct net_device *dev, tot->rx_frame_errors = dev->stats.rx_frame_errors; } -static 
int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) +int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) { struct net_device *dev = vi->dev; @@ -2041,66 +1994,6 @@ static int virtnet_close(struct net_device *dev) return 0; } -static void virtnet_get_ringparam(struct net_device *dev, - struct ethtool_ringparam *ring, - struct kernel_ethtool_ringparam *kernel_ring, - struct netlink_ext_ack *extack) -{ - struct virtnet_info *vi = netdev_priv(dev); - - ring->rx_max_pending = vi->rq[0].vq->num_max; - ring->tx_max_pending = vi->sq[0].vq->num_max; - ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); - ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); -} - -static int virtnet_set_ringparam(struct net_device *dev, - struct ethtool_ringparam *ring, - struct kernel_ethtool_ringparam *kernel_ring, - struct netlink_ext_ack *extack) -{ - struct virtnet_info *vi = netdev_priv(dev); - u32 rx_pending, tx_pending; - struct virtnet_rq *rq; - struct virtnet_sq *sq; - int i, err; - - if (ring->rx_mini_pending || ring->rx_jumbo_pending) - return -EINVAL; - - rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); - tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); - - if (ring->rx_pending == rx_pending && - ring->tx_pending == tx_pending) - return 0; - - if (ring->rx_pending > vi->rq[0].vq->num_max) - return -EINVAL; - - if (ring->tx_pending > vi->sq[0].vq->num_max) - return -EINVAL; - - for (i = 0; i < vi->max_queue_pairs; i++) { - rq = vi->rq + i; - sq = vi->sq + i; - - if (ring->tx_pending != tx_pending) { - err = virtnet_tx_resize(vi, sq, ring->tx_pending); - if (err) - return err; - } - - if (ring->rx_pending != rx_pending) { - err = virtnet_rx_resize(vi, rq, ring->rx_pending); - if (err) - return err; - } - } - - return 0; -} - static void virtnet_init_default_rss(struct virtnet_info *vi) { u32 indir_val = 0; @@ -2123,351 +2016,6 @@ static void virtnet_init_default_rss(struct virtnet_info *vi) netdev_rss_key_fill(vi->ctrl->rss.key, 
vi->rss_key_size); } -static void virtnet_get_hashflow(const struct virtnet_info *vi, struct ethtool_rxnfc *info) -{ - info->data = 0; - switch (info->flow_type) { - case TCP_V4_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv4) { - info->data = RXH_IP_SRC | RXH_IP_DST | - RXH_L4_B_0_1 | RXH_L4_B_2_3; - } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) { - info->data = RXH_IP_SRC | RXH_IP_DST; - } - break; - case TCP_V6_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv6) { - info->data = RXH_IP_SRC | RXH_IP_DST | - RXH_L4_B_0_1 | RXH_L4_B_2_3; - } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) { - info->data = RXH_IP_SRC | RXH_IP_DST; - } - break; - case UDP_V4_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv4) { - info->data = RXH_IP_SRC | RXH_IP_DST | - RXH_L4_B_0_1 | RXH_L4_B_2_3; - } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) { - info->data = RXH_IP_SRC | RXH_IP_DST; - } - break; - case UDP_V6_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv6) { - info->data = RXH_IP_SRC | RXH_IP_DST | - RXH_L4_B_0_1 | RXH_L4_B_2_3; - } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) { - info->data = RXH_IP_SRC | RXH_IP_DST; - } - break; - case IPV4_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) - info->data = RXH_IP_SRC | RXH_IP_DST; - - break; - case IPV6_FLOW: - if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) - info->data = RXH_IP_SRC | RXH_IP_DST; - - break; - default: - info->data = 0; - break; - } -} - -static bool virtnet_set_hashflow(struct virtnet_info *vi, struct ethtool_rxnfc *info) -{ - u32 new_hashtypes = vi->rss_hash_types_saved; - bool is_disable = info->data & RXH_DISCARD; - bool is_l4 = info->data == (RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3); - - /* supports only 'sd', 'sdfn' and 'r' */ - if (!((info->data == (RXH_IP_SRC | RXH_IP_DST)) | is_l4 | 
is_disable)) - return false; - - switch (info->flow_type) { - case TCP_V4_FLOW: - new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv4 | VIRTIO_NET_RSS_HASH_TYPE_TCPv4); - if (!is_disable) - new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv4 - | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_TCPv4 : 0); - break; - case UDP_V4_FLOW: - new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv4 | VIRTIO_NET_RSS_HASH_TYPE_UDPv4); - if (!is_disable) - new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv4 - | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_UDPv4 : 0); - break; - case IPV4_FLOW: - new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv4; - if (!is_disable) - new_hashtypes = VIRTIO_NET_RSS_HASH_TYPE_IPv4; - break; - case TCP_V6_FLOW: - new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv6 | VIRTIO_NET_RSS_HASH_TYPE_TCPv6); - if (!is_disable) - new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv6 - | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_TCPv6 : 0); - break; - case UDP_V6_FLOW: - new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv6 | VIRTIO_NET_RSS_HASH_TYPE_UDPv6); - if (!is_disable) - new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv6 - | (is_l4 ? 
VIRTIO_NET_RSS_HASH_TYPE_UDPv6 : 0); - break; - case IPV6_FLOW: - new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv6; - if (!is_disable) - new_hashtypes = VIRTIO_NET_RSS_HASH_TYPE_IPv6; - break; - default: - /* unsupported flow */ - return false; - } - - /* if unsupported hashtype was set */ - if (new_hashtypes != (new_hashtypes & vi->rss_hash_types_supported)) - return false; - - if (new_hashtypes != vi->rss_hash_types_saved) { - vi->rss_hash_types_saved = new_hashtypes; - vi->ctrl->rss.hash_types = vi->rss_hash_types_saved; - if (vi->dev->features & NETIF_F_RXHASH) - return virtnet_commit_rss_command(vi); - } - - return true; -} - -static void virtnet_get_drvinfo(struct net_device *dev, - struct ethtool_drvinfo *info) -{ - struct virtnet_info *vi = netdev_priv(dev); - struct virtio_device *vdev = vi->vdev; - - strscpy(info->driver, KBUILD_MODNAME, sizeof(info->driver)); - strscpy(info->version, VIRTNET_DRIVER_VERSION, sizeof(info->version)); - strscpy(info->bus_info, virtio_bus_name(vdev), sizeof(info->bus_info)); - -} - -/* TODO: Eliminate OOO packets during switching */ -static int virtnet_set_channels(struct net_device *dev, - struct ethtool_channels *channels) -{ - struct virtnet_info *vi = netdev_priv(dev); - u16 queue_pairs = channels->combined_count; - int err; - - /* We don't support separate rx/tx channels. - * We don't allow setting 'other' channels. - */ - if (channels->rx_count || channels->tx_count || channels->other_count) - return -EINVAL; - - if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0) - return -EINVAL; - - /* For now we don't support modifying channels while XDP is loaded - * also when XDP is loaded all RX queues have XDP programs so we only - * need to check a single RX queue. 
- */ - if (vi->rq[0].xdp_prog) - return -EINVAL; - - cpus_read_lock(); - err = _virtnet_set_queues(vi, queue_pairs); - if (err) { - cpus_read_unlock(); - goto err; - } - virtnet_set_affinity(vi); - cpus_read_unlock(); - - netif_set_real_num_tx_queues(dev, queue_pairs); - netif_set_real_num_rx_queues(dev, queue_pairs); - err: - return err; -} - -static void virtnet_get_strings(struct net_device *dev, u32 stringset, u8 *data) -{ - struct virtnet_info *vi = netdev_priv(dev); - unsigned int i, j; - u8 *p = data; - - switch (stringset) { - case ETH_SS_STATS: - for (i = 0; i < vi->curr_queue_pairs; i++) { - for (j = 0; j < VIRTNET_RQ_STATS_LEN; j++) - ethtool_sprintf(&p, "rx_queue_%u_%s", i, - virtnet_rq_stats_desc[j].desc); - } - - for (i = 0; i < vi->curr_queue_pairs; i++) { - for (j = 0; j < VIRTNET_SQ_STATS_LEN; j++) - ethtool_sprintf(&p, "tx_queue_%u_%s", i, - virtnet_sq_stats_desc[j].desc); - } - break; - } -} - -static int virtnet_get_sset_count(struct net_device *dev, int sset) -{ - struct virtnet_info *vi = netdev_priv(dev); - - switch (sset) { - case ETH_SS_STATS: - return vi->curr_queue_pairs * (VIRTNET_RQ_STATS_LEN + - VIRTNET_SQ_STATS_LEN); - default: - return -EOPNOTSUPP; - } -} - -static void virtnet_get_ethtool_stats(struct net_device *dev, - struct ethtool_stats *stats, u64 *data) -{ - struct virtnet_info *vi = netdev_priv(dev); - unsigned int idx = 0, start, i, j; - const u8 *stats_base; - size_t offset; - - for (i = 0; i < vi->curr_queue_pairs; i++) { - struct virtnet_rq *rq = &vi->rq[i]; - - stats_base = (u8 *)&rq->stats; - do { - start = u64_stats_fetch_begin(&rq->stats.syncp); - for (j = 0; j < VIRTNET_RQ_STATS_LEN; j++) { - offset = virtnet_rq_stats_desc[j].offset; - data[idx + j] = *(u64 *)(stats_base + offset); - } - } while (u64_stats_fetch_retry(&rq->stats.syncp, start)); - idx += VIRTNET_RQ_STATS_LEN; - } - - for (i = 0; i < vi->curr_queue_pairs; i++) { - struct virtnet_sq *sq = &vi->sq[i]; - - stats_base = (u8 *)&sq->stats; - do { - start = 
u64_stats_fetch_begin(&sq->stats.syncp); - for (j = 0; j < VIRTNET_SQ_STATS_LEN; j++) { - offset = virtnet_sq_stats_desc[j].offset; - data[idx + j] = *(u64 *)(stats_base + offset); - } - } while (u64_stats_fetch_retry(&sq->stats.syncp, start)); - idx += VIRTNET_SQ_STATS_LEN; - } -} - -static void virtnet_get_channels(struct net_device *dev, - struct ethtool_channels *channels) -{ - struct virtnet_info *vi = netdev_priv(dev); - - channels->combined_count = vi->curr_queue_pairs; - channels->max_combined = vi->max_queue_pairs; - channels->max_other = 0; - channels->rx_count = 0; - channels->tx_count = 0; - channels->other_count = 0; -} - -static int virtnet_set_link_ksettings(struct net_device *dev, - const struct ethtool_link_ksettings *cmd) -{ - struct virtnet_info *vi = netdev_priv(dev); - - return ethtool_virtdev_set_link_ksettings(dev, cmd, - &vi->speed, &vi->duplex); -} - -static int virtnet_get_link_ksettings(struct net_device *dev, - struct ethtool_link_ksettings *cmd) -{ - struct virtnet_info *vi = netdev_priv(dev); - - cmd->base.speed = vi->speed; - cmd->base.duplex = vi->duplex; - cmd->base.port = PORT_OTHER; - - return 0; -} - -static int virtnet_coal_params_supported(struct ethtool_coalesce *ec) -{ - /* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL - * feature is negotiated. - */ - if (ec->rx_coalesce_usecs || ec->tx_coalesce_usecs) - return -EOPNOTSUPP; - - if (ec->tx_max_coalesced_frames > 1 || - ec->rx_max_coalesced_frames != 1) - return -EINVAL; - - return 0; -} - -static int virtnet_set_coalesce(struct net_device *dev, - struct ethtool_coalesce *ec, - struct kernel_ethtool_coalesce *kernel_coal, - struct netlink_ext_ack *extack) -{ - struct virtnet_info *vi = netdev_priv(dev); - int ret, i, napi_weight; - bool update_napi = false; - - /* Can't change NAPI weight if the link is up */ - napi_weight = ec->tx_max_coalesced_frames ? 
NAPI_POLL_WEIGHT : 0; - if (napi_weight ^ vi->sq[0].napi.weight) { - if (dev->flags & IFF_UP) - return -EBUSY; - else - update_napi = true; - } - - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) - ret = virtnet_send_notf_coal_cmds(vi, ec); - else - ret = virtnet_coal_params_supported(ec); - - if (ret) - return ret; - - if (update_napi) { - for (i = 0; i < vi->max_queue_pairs; i++) - vi->sq[i].napi.weight = napi_weight; - } - - return ret; -} - -static int virtnet_get_coalesce(struct net_device *dev, - struct ethtool_coalesce *ec, - struct kernel_ethtool_coalesce *kernel_coal, - struct netlink_ext_ack *extack) -{ - struct virtnet_info *vi = netdev_priv(dev); - - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) { - ec->rx_coalesce_usecs = vi->rx_usecs; - ec->tx_coalesce_usecs = vi->tx_usecs; - ec->tx_max_coalesced_frames = vi->tx_max_packets; - ec->rx_max_coalesced_frames = vi->rx_max_packets; - } else { - ec->rx_max_coalesced_frames = 1; - - if (vi->sq[0].napi.weight) - ec->tx_max_coalesced_frames = 1; - } - - return 0; -} - static void virtnet_init_settings(struct net_device *dev) { struct virtnet_info *vi = netdev_priv(dev); @@ -2495,117 +2043,6 @@ static void virtnet_update_settings(struct virtnet_info *vi) vi->duplex = duplex; } -static u32 virtnet_get_rxfh_key_size(struct net_device *dev) -{ - return ((struct virtnet_info *)netdev_priv(dev))->rss_key_size; -} - -static u32 virtnet_get_rxfh_indir_size(struct net_device *dev) -{ - return ((struct virtnet_info *)netdev_priv(dev))->rss_indir_table_size; -} - -static int virtnet_get_rxfh(struct net_device *dev, u32 *indir, u8 *key, u8 *hfunc) -{ - struct virtnet_info *vi = netdev_priv(dev); - int i; - - if (indir) { - for (i = 0; i < vi->rss_indir_table_size; ++i) - indir[i] = vi->ctrl->rss.indirection_table[i]; - } - - if (key) - memcpy(key, vi->ctrl->rss.key, vi->rss_key_size); - - if (hfunc) - *hfunc = ETH_RSS_HASH_TOP; - - return 0; -} - -static int virtnet_set_rxfh(struct net_device *dev, 
const u32 *indir, const u8 *key, const u8 hfunc) -{ - struct virtnet_info *vi = netdev_priv(dev); - int i; - - if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) - return -EOPNOTSUPP; - - if (indir) { - for (i = 0; i < vi->rss_indir_table_size; ++i) - vi->ctrl->rss.indirection_table[i] = indir[i]; - } - if (key) - memcpy(vi->ctrl->rss.key, key, vi->rss_key_size); - - virtnet_commit_rss_command(vi); - - return 0; -} - -static int virtnet_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, u32 *rule_locs) -{ - struct virtnet_info *vi = netdev_priv(dev); - int rc = 0; - - switch (info->cmd) { - case ETHTOOL_GRXRINGS: - info->data = vi->curr_queue_pairs; - break; - case ETHTOOL_GRXFH: - virtnet_get_hashflow(vi, info); - break; - default: - rc = -EOPNOTSUPP; - } - - return rc; -} - -static int virtnet_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info) -{ - struct virtnet_info *vi = netdev_priv(dev); - int rc = 0; - - switch (info->cmd) { - case ETHTOOL_SRXFH: - if (!virtnet_set_hashflow(vi, info)) - rc = -EINVAL; - - break; - default: - rc = -EOPNOTSUPP; - } - - return rc; -} - -static const struct ethtool_ops virtnet_ethtool_ops = { - .supported_coalesce_params = ETHTOOL_COALESCE_MAX_FRAMES | - ETHTOOL_COALESCE_USECS, - .get_drvinfo = virtnet_get_drvinfo, - .get_link = ethtool_op_get_link, - .get_ringparam = virtnet_get_ringparam, - .set_ringparam = virtnet_set_ringparam, - .get_strings = virtnet_get_strings, - .get_sset_count = virtnet_get_sset_count, - .get_ethtool_stats = virtnet_get_ethtool_stats, - .set_channels = virtnet_set_channels, - .get_channels = virtnet_get_channels, - .get_ts_info = ethtool_op_get_ts_info, - .get_link_ksettings = virtnet_get_link_ksettings, - .set_link_ksettings = virtnet_set_link_ksettings, - .set_coalesce = virtnet_set_coalesce, - .get_coalesce = virtnet_get_coalesce, - .get_rxfh_key_size = virtnet_get_rxfh_key_size, - .get_rxfh_indir_size = virtnet_get_rxfh_indir_size, - .get_rxfh = virtnet_get_rxfh, 
- .set_rxfh = virtnet_set_rxfh, - .get_rxnfc = virtnet_get_rxnfc, - .set_rxnfc = virtnet_set_rxnfc, -}; - static void virtnet_freeze_down(struct virtio_device *vdev) { struct virtnet_info *vi = vdev->priv; @@ -3352,7 +2789,7 @@ static int virtnet_probe(struct virtio_device *vdev) dev->netdev_ops = &virtnet_netdev; dev->features = NETIF_F_HIGHDMA; - dev->ethtool_ops = &virtnet_ethtool_ops; + dev->ethtool_ops = virtnet_get_ethtool_ops(); SET_NETDEV_DEV(dev, &vdev->dev); /* Do we support "hardware" checksums? */ diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h index 669e0499f340..b889825c54d0 100644 --- a/drivers/net/virtio/virtnet.h +++ b/drivers/net/virtio/virtnet.h @@ -181,4 +181,7 @@ struct virtnet_info { struct failover *failover; }; +int virtnet_rx_resize(struct virtnet_info *vi, struct virtnet_rq *rq, u32 ring_num); +int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, u32 ring_num); +int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs); #endif diff --git a/drivers/net/virtio/virtnet_ethtool.c b/drivers/net/virtio/virtnet_ethtool.c new file mode 100644 index 000000000000..ac0595744b4c --- /dev/null +++ b/drivers/net/virtio/virtnet_ethtool.c @@ -0,0 +1,578 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +// +#include <linux/netdevice.h> +#include <linux/ethtool.h> +#include <linux/cpu.h> +#include <linux/virtio.h> +#include <linux/virtio_net.h> + +#include "virtnet.h" +#include "virtnet_ctrl.h" +#include "virtnet_common.h" +#include "virtnet_ethtool.h" + +struct virtnet_stat_desc { + char desc[ETH_GSTRING_LEN]; + size_t offset; +}; + +#define VIRTNET_SQ_STAT(m) offsetof(struct virtnet_sq_stats, m) +#define VIRTNET_RQ_STAT(m) offsetof(struct virtnet_rq_stats, m) + +static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = { + { "packets", VIRTNET_SQ_STAT(packets) }, + { "bytes", VIRTNET_SQ_STAT(bytes) }, + { "xdp_tx", VIRTNET_SQ_STAT(xdp_tx) }, + { "xdp_tx_drops", VIRTNET_SQ_STAT(xdp_tx_drops) }, + { 
"kicks", VIRTNET_SQ_STAT(kicks) }, + { "tx_timeouts", VIRTNET_SQ_STAT(tx_timeouts) }, +}; + +static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { + { "packets", VIRTNET_RQ_STAT(packets) }, + { "bytes", VIRTNET_RQ_STAT(bytes) }, + { "drops", VIRTNET_RQ_STAT(drops) }, + { "xdp_packets", VIRTNET_RQ_STAT(xdp_packets) }, + { "xdp_tx", VIRTNET_RQ_STAT(xdp_tx) }, + { "xdp_redirects", VIRTNET_RQ_STAT(xdp_redirects) }, + { "xdp_drops", VIRTNET_RQ_STAT(xdp_drops) }, + { "kicks", VIRTNET_RQ_STAT(kicks) }, +}; + +#define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc) +#define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc) + +void virtnet_rq_update_stats(struct virtnet_rq *rq, struct virtnet_rq_stats *stats) +{ + int i; + + u64_stats_update_begin(&rq->stats.syncp); + for (i = 0; i < VIRTNET_RQ_STATS_LEN; i++) { + size_t offset = virtnet_rq_stats_desc[i].offset; + u64 *item; + + item = (u64 *)((u8 *)&rq->stats + offset); + *item += *(u64 *)((u8 *)stats + offset); + } + u64_stats_update_end(&rq->stats.syncp); +} + +static void virtnet_get_ringparam(struct net_device *dev, + struct ethtool_ringparam *ring, + struct kernel_ethtool_ringparam *kernel_ring, + struct netlink_ext_ack *extack) +{ + struct virtnet_info *vi = netdev_priv(dev); + + ring->rx_max_pending = vi->rq[0].vq->num_max; + ring->tx_max_pending = vi->sq[0].vq->num_max; + ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); + ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); +} + +static int virtnet_set_ringparam(struct net_device *dev, + struct ethtool_ringparam *ring, + struct kernel_ethtool_ringparam *kernel_ring, + struct netlink_ext_ack *extack) +{ + struct virtnet_info *vi = netdev_priv(dev); + u32 rx_pending, tx_pending; + struct virtnet_rq *rq; + struct virtnet_sq *sq; + int i, err; + + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + rx_pending = virtqueue_get_vring_size(vi->rq[0].vq); + tx_pending = virtqueue_get_vring_size(vi->sq[0].vq); + 
+ if (ring->rx_pending == rx_pending && + ring->tx_pending == tx_pending) + return 0; + + if (ring->rx_pending > vi->rq[0].vq->num_max) + return -EINVAL; + + if (ring->tx_pending > vi->sq[0].vq->num_max) + return -EINVAL; + + for (i = 0; i < vi->max_queue_pairs; i++) { + rq = vi->rq + i; + sq = vi->sq + i; + + if (ring->tx_pending != tx_pending) { + err = virtnet_tx_resize(vi, sq, ring->tx_pending); + if (err) + return err; + } + + if (ring->rx_pending != rx_pending) { + err = virtnet_rx_resize(vi, rq, ring->rx_pending); + if (err) + return err; + } + } + + return 0; +} + +static void virtnet_get_hashflow(const struct virtnet_info *vi, struct ethtool_rxnfc *info) +{ + info->data = 0; + switch (info->flow_type) { + case TCP_V4_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv4) { + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) { + info->data = RXH_IP_SRC | RXH_IP_DST; + } + break; + case TCP_V6_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv6) { + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) { + info->data = RXH_IP_SRC | RXH_IP_DST; + } + break; + case UDP_V4_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv4) { + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) { + info->data = RXH_IP_SRC | RXH_IP_DST; + } + break; + case UDP_V6_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv6) { + info->data = RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3; + } else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) { + info->data = RXH_IP_SRC | RXH_IP_DST; + } + break; + case IPV4_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) + info->data = RXH_IP_SRC | RXH_IP_DST; + + break; + 
case IPV6_FLOW: + if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) + info->data = RXH_IP_SRC | RXH_IP_DST; + + break; + default: + info->data = 0; + break; + } +} + +static bool virtnet_set_hashflow(struct virtnet_info *vi, struct ethtool_rxnfc *info) +{ + u32 new_hashtypes = vi->rss_hash_types_saved; + bool is_disable = info->data & RXH_DISCARD; + bool is_l4 = info->data == (RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3); + + /* supports only 'sd', 'sdfn' and 'r' */ + if (!((info->data == (RXH_IP_SRC | RXH_IP_DST)) | is_l4 | is_disable)) + return false; + + switch (info->flow_type) { + case TCP_V4_FLOW: + new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv4 | VIRTIO_NET_RSS_HASH_TYPE_TCPv4); + if (!is_disable) + new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv4 + | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_TCPv4 : 0); + break; + case UDP_V4_FLOW: + new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv4 | VIRTIO_NET_RSS_HASH_TYPE_UDPv4); + if (!is_disable) + new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv4 + | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_UDPv4 : 0); + break; + case IPV4_FLOW: + new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv4; + if (!is_disable) + new_hashtypes = VIRTIO_NET_RSS_HASH_TYPE_IPv4; + break; + case TCP_V6_FLOW: + new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv6 | VIRTIO_NET_RSS_HASH_TYPE_TCPv6); + if (!is_disable) + new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv6 + | (is_l4 ? VIRTIO_NET_RSS_HASH_TYPE_TCPv6 : 0); + break; + case UDP_V6_FLOW: + new_hashtypes &= ~(VIRTIO_NET_RSS_HASH_TYPE_IPv6 | VIRTIO_NET_RSS_HASH_TYPE_UDPv6); + if (!is_disable) + new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv6 + | (is_l4 ? 
VIRTIO_NET_RSS_HASH_TYPE_UDPv6 : 0); + break; + case IPV6_FLOW: + new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv6; + if (!is_disable) + new_hashtypes = VIRTIO_NET_RSS_HASH_TYPE_IPv6; + break; + default: + /* unsupported flow */ + return false; + } + + /* if unsupported hashtype was set */ + if (new_hashtypes != (new_hashtypes & vi->rss_hash_types_supported)) + return false; + + if (new_hashtypes != vi->rss_hash_types_saved) { + vi->rss_hash_types_saved = new_hashtypes; + vi->ctrl->rss.hash_types = vi->rss_hash_types_saved; + if (vi->dev->features & NETIF_F_RXHASH) + return virtnet_commit_rss_command(vi); + } + + return true; +} + +static void virtnet_get_drvinfo(struct net_device *dev, + struct ethtool_drvinfo *info) +{ + struct virtnet_info *vi = netdev_priv(dev); + struct virtio_device *vdev = vi->vdev; + + strscpy(info->driver, KBUILD_MODNAME, sizeof(info->driver)); + strscpy(info->version, VIRTNET_DRIVER_VERSION, sizeof(info->version)); + strscpy(info->bus_info, virtio_bus_name(vdev), sizeof(info->bus_info)); +} + +/* TODO: Eliminate OOO packets during switching */ +static int virtnet_set_channels(struct net_device *dev, + struct ethtool_channels *channels) +{ + struct virtnet_info *vi = netdev_priv(dev); + u16 queue_pairs = channels->combined_count; + int err; + + /* We don't support separate rx/tx channels. + * We don't allow setting 'other' channels. + */ + if (channels->rx_count || channels->tx_count || channels->other_count) + return -EINVAL; + + if (queue_pairs > vi->max_queue_pairs || queue_pairs == 0) + return -EINVAL; + + /* For now we don't support modifying channels while XDP is loaded + * also when XDP is loaded all RX queues have XDP programs so we only + * need to check a single RX queue. 
+ */ + if (vi->rq[0].xdp_prog) + return -EINVAL; + + cpus_read_lock(); + err = _virtnet_set_queues(vi, queue_pairs); + if (err) { + cpus_read_unlock(); + goto err; + } + virtnet_set_affinity(vi); + cpus_read_unlock(); + + netif_set_real_num_tx_queues(dev, queue_pairs); + netif_set_real_num_rx_queues(dev, queue_pairs); + err: + return err; +} + +static void virtnet_get_strings(struct net_device *dev, u32 stringset, u8 *data) +{ + struct virtnet_info *vi = netdev_priv(dev); + unsigned int i, j; + u8 *p = data; + + switch (stringset) { + case ETH_SS_STATS: + for (i = 0; i < vi->curr_queue_pairs; i++) { + for (j = 0; j < VIRTNET_RQ_STATS_LEN; j++) + ethtool_sprintf(&p, "rx_queue_%u_%s", i, + virtnet_rq_stats_desc[j].desc); + } + + for (i = 0; i < vi->curr_queue_pairs; i++) { + for (j = 0; j < VIRTNET_SQ_STATS_LEN; j++) + ethtool_sprintf(&p, "tx_queue_%u_%s", i, + virtnet_sq_stats_desc[j].desc); + } + break; + } +} + +static int virtnet_get_sset_count(struct net_device *dev, int sset) +{ + struct virtnet_info *vi = netdev_priv(dev); + + switch (sset) { + case ETH_SS_STATS: + return vi->curr_queue_pairs * (VIRTNET_RQ_STATS_LEN + + VIRTNET_SQ_STATS_LEN); + default: + return -EOPNOTSUPP; + } +} + +static void virtnet_get_ethtool_stats(struct net_device *dev, + struct ethtool_stats *stats, u64 *data) +{ + struct virtnet_info *vi = netdev_priv(dev); + unsigned int idx = 0, start, i, j; + const u8 *stats_base; + size_t offset; + + for (i = 0; i < vi->curr_queue_pairs; i++) { + struct virtnet_rq *rq = &vi->rq[i]; + + stats_base = (u8 *)&rq->stats; + do { + start = u64_stats_fetch_begin(&rq->stats.syncp); + for (j = 0; j < VIRTNET_RQ_STATS_LEN; j++) { + offset = virtnet_rq_stats_desc[j].offset; + data[idx + j] = *(u64 *)(stats_base + offset); + } + } while (u64_stats_fetch_retry(&rq->stats.syncp, start)); + idx += VIRTNET_RQ_STATS_LEN; + } + + for (i = 0; i < vi->curr_queue_pairs; i++) { + struct virtnet_sq *sq = &vi->sq[i]; + + stats_base = (u8 *)&sq->stats; + do { + start = 
u64_stats_fetch_begin(&sq->stats.syncp); + for (j = 0; j < VIRTNET_SQ_STATS_LEN; j++) { + offset = virtnet_sq_stats_desc[j].offset; + data[idx + j] = *(u64 *)(stats_base + offset); + } + } while (u64_stats_fetch_retry(&sq->stats.syncp, start)); + idx += VIRTNET_SQ_STATS_LEN; + } +} + +static void virtnet_get_channels(struct net_device *dev, + struct ethtool_channels *channels) +{ + struct virtnet_info *vi = netdev_priv(dev); + + channels->combined_count = vi->curr_queue_pairs; + channels->max_combined = vi->max_queue_pairs; + channels->max_other = 0; + channels->rx_count = 0; + channels->tx_count = 0; + channels->other_count = 0; +} + +static int virtnet_get_link_ksettings(struct net_device *dev, + struct ethtool_link_ksettings *cmd) +{ + struct virtnet_info *vi = netdev_priv(dev); + + cmd->base.speed = vi->speed; + cmd->base.duplex = vi->duplex; + cmd->base.port = PORT_OTHER; + + return 0; +} + +static int virtnet_set_link_ksettings(struct net_device *dev, + const struct ethtool_link_ksettings *cmd) +{ + struct virtnet_info *vi = netdev_priv(dev); + + return ethtool_virtdev_set_link_ksettings(dev, cmd, + &vi->speed, &vi->duplex); +} + +static int virtnet_coal_params_supported(struct ethtool_coalesce *ec) +{ + /* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL + * feature is negotiated. + */ + if (ec->rx_coalesce_usecs || ec->tx_coalesce_usecs) + return -EOPNOTSUPP; + + if (ec->tx_max_coalesced_frames > 1 || + ec->rx_max_coalesced_frames != 1) + return -EINVAL; + + return 0; +} + +static int virtnet_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + struct virtnet_info *vi = netdev_priv(dev); + int ret, i, napi_weight; + bool update_napi = false; + + /* Can't change NAPI weight if the link is up */ + napi_weight = ec->tx_max_coalesced_frames ? 
NAPI_POLL_WEIGHT : 0; + if (napi_weight ^ vi->sq[0].napi.weight) { + if (dev->flags & IFF_UP) + return -EBUSY; + else + update_napi = true; + } + + if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) + ret = virtnet_send_notf_coal_cmds(vi, ec); + else + ret = virtnet_coal_params_supported(ec); + + if (ret) + return ret; + + if (update_napi) { + for (i = 0; i < vi->max_queue_pairs; i++) + vi->sq[i].napi.weight = napi_weight; + } + + return ret; +} + +static int virtnet_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + struct virtnet_info *vi = netdev_priv(dev); + + if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) { + ec->rx_coalesce_usecs = vi->rx_usecs; + ec->tx_coalesce_usecs = vi->tx_usecs; + ec->tx_max_coalesced_frames = vi->tx_max_packets; + ec->rx_max_coalesced_frames = vi->rx_max_packets; + } else { + ec->rx_max_coalesced_frames = 1; + + if (vi->sq[0].napi.weight) + ec->tx_max_coalesced_frames = 1; + } + + return 0; +} + +static u32 virtnet_get_rxfh_key_size(struct net_device *dev) +{ + return ((struct virtnet_info *)netdev_priv(dev))->rss_key_size; +} + +static u32 virtnet_get_rxfh_indir_size(struct net_device *dev) +{ + return ((struct virtnet_info *)netdev_priv(dev))->rss_indir_table_size; +} + +static int virtnet_get_rxfh(struct net_device *dev, u32 *indir, u8 *key, u8 *hfunc) +{ + struct virtnet_info *vi = netdev_priv(dev); + int i; + + if (indir) { + for (i = 0; i < vi->rss_indir_table_size; ++i) + indir[i] = vi->ctrl->rss.indirection_table[i]; + } + + if (key) + memcpy(key, vi->ctrl->rss.key, vi->rss_key_size); + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + return 0; +} + +static int virtnet_set_rxfh(struct net_device *dev, const u32 *indir, const u8 *key, const u8 hfunc) +{ + struct virtnet_info *vi = netdev_priv(dev); + int i; + + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) + return -EOPNOTSUPP; + + if (indir) 
{ + for (i = 0; i < vi->rss_indir_table_size; ++i) + vi->ctrl->rss.indirection_table[i] = indir[i]; + } + if (key) + memcpy(vi->ctrl->rss.key, key, vi->rss_key_size); + + virtnet_commit_rss_command(vi); + + return 0; +} + +static int virtnet_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, u32 *rule_locs) +{ + struct virtnet_info *vi = netdev_priv(dev); + int rc = 0; + + switch (info->cmd) { + case ETHTOOL_GRXRINGS: + info->data = vi->curr_queue_pairs; + break; + case ETHTOOL_GRXFH: + virtnet_get_hashflow(vi, info); + break; + default: + rc = -EOPNOTSUPP; + } + + return rc; +} + +static int virtnet_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info) +{ + struct virtnet_info *vi = netdev_priv(dev); + int rc = 0; + + switch (info->cmd) { + case ETHTOOL_SRXFH: + if (!virtnet_set_hashflow(vi, info)) + rc = -EINVAL; + + break; + default: + rc = -EOPNOTSUPP; + } + + return rc; +} + +static const struct ethtool_ops virtnet_ethtool_ops = { + .supported_coalesce_params = ETHTOOL_COALESCE_MAX_FRAMES | + ETHTOOL_COALESCE_USECS, + .get_drvinfo = virtnet_get_drvinfo, + .get_link = ethtool_op_get_link, + .get_ringparam = virtnet_get_ringparam, + .set_ringparam = virtnet_set_ringparam, + .get_strings = virtnet_get_strings, + .get_sset_count = virtnet_get_sset_count, + .get_ethtool_stats = virtnet_get_ethtool_stats, + .set_channels = virtnet_set_channels, + .get_channels = virtnet_get_channels, + .get_ts_info = ethtool_op_get_ts_info, + .get_link_ksettings = virtnet_get_link_ksettings, + .set_link_ksettings = virtnet_set_link_ksettings, + .set_coalesce = virtnet_set_coalesce, + .get_coalesce = virtnet_get_coalesce, + .get_rxfh_key_size = virtnet_get_rxfh_key_size, + .get_rxfh_indir_size = virtnet_get_rxfh_indir_size, + .get_rxfh = virtnet_get_rxfh, + .set_rxfh = virtnet_set_rxfh, + .get_rxnfc = virtnet_get_rxnfc, + .set_rxnfc = virtnet_set_rxnfc, +}; + +const struct ethtool_ops *virtnet_get_ethtool_ops(void) +{ + return &virtnet_ethtool_ops; +} diff --git 
a/drivers/net/virtio/virtnet_ethtool.h b/drivers/net/virtio/virtnet_ethtool.h
new file mode 100644
index 000000000000..ed1b7a4877e0
--- /dev/null
+++ b/drivers/net/virtio/virtnet_ethtool.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __VIRTNET_ETHTOOL_H__
+#define __VIRTNET_ETHTOOL_H__
+
+void virtnet_rq_update_stats(struct virtnet_rq *rq, struct virtnet_rq_stats *stats);
+const struct ethtool_ops *virtnet_get_ethtool_ops(void);
+#endif
--
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 11/16] virtio_net: introduce virtnet_dev_rx_queue_group()
Add an API to set sysfs_rx_queue_group. This is preparation for separating the virtio-related functions.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio/virtnet.c | 15 +++++++++++----
 drivers/net/virtio/virtnet.h |  1 +
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c
index 1323c6733f56..3f58af7d1550 100644
--- a/drivers/net/virtio/virtnet.c
+++ b/drivers/net/virtio/virtnet.c
@@ -2661,6 +2661,16 @@ static const struct attribute_group virtio_net_mrg_rx_group = {
 	.name = "virtio_net",
 	.attrs = virtio_net_mrg_rx_attrs
 };
+
+void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev)
+{
+	if (vi->mergeable_rx_bufs)
+		dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
+}
+#else
+void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev)
+{
+}
 #endif
 
 static bool virtnet_fail_on_feature(struct virtio_device *vdev,
@@ -2943,10 +2953,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (err)
 		goto free;
 
-#ifdef CONFIG_SYSFS
-	if (vi->mergeable_rx_bufs)
-		dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
-#endif
+	virtnet_dev_rx_queue_group(vi, dev);
 
 	netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
 	netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h
index b889825c54d0..48e0c5ba346a 100644
--- a/drivers/net/virtio/virtnet.h
+++ b/drivers/net/virtio/virtnet.h
@@ -184,4 +184,5 @@ struct virtnet_info {
 int virtnet_rx_resize(struct virtnet_info *vi, struct virtnet_rq *rq, u32 ring_num);
 int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, u32 ring_num);
 int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs);
+void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev);
 #endif
--
2.32.0.3.g01195cf9f
Add an API to get netdev_ops, avoiding direct use of the netdev_ops symbol. This is preparation for separating the virtio-related functions.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio/virtnet.c | 11 ++++++++---
 drivers/net/virtio/virtnet.h |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c
index 3f58af7d1550..5f508d9500f3 100644
--- a/drivers/net/virtio/virtnet.c
+++ b/drivers/net/virtio/virtnet.c
@@ -2054,7 +2054,7 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
 	netif_device_detach(vi->dev);
 	netif_tx_unlock_bh(vi->dev);
 	if (netif_running(vi->dev))
-		virtnet_close(vi->dev);
+		virtnet_get_netdev()->ndo_stop(vi->dev);
 }
 
 static int init_vqs(struct virtnet_info *vi);
@@ -2073,7 +2073,7 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 	enable_delayed_refill(vi);
 
 	if (netif_running(vi->dev)) {
-		err = virtnet_open(vi->dev);
+		err = virtnet_get_netdev()->ndo_open(vi->dev);
 		if (err)
 			return err;
 	}
@@ -2319,6 +2319,11 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_tx_timeout = virtnet_tx_timeout,
 };
 
+const struct net_device_ops *virtnet_get_netdev(void)
+{
+	return &virtnet_netdev;
+}
+
 static void virtnet_config_changed_work(struct work_struct *work)
 {
 	struct virtnet_info *vi =
@@ -2796,7 +2801,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 	/* Set up network device as normal. */
 	dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE |
 			   IFF_TX_SKB_NO_LINEAR;
-	dev->netdev_ops = &virtnet_netdev;
+	dev->netdev_ops = virtnet_get_netdev();
 	dev->features = NETIF_F_HIGHDMA;
 
 	dev->ethtool_ops = virtnet_get_ethtool_ops();
diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h
index 48e0c5ba346a..269ddc386418 100644
--- a/drivers/net/virtio/virtnet.h
+++ b/drivers/net/virtio/virtnet.h
@@ -185,4 +185,5 @@ int virtnet_rx_resize(struct virtnet_info *vi, struct virtnet_rq *rq, u32 ring_n
 int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, u32 ring_num);
 int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs);
 void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev);
+const struct net_device_ops *virtnet_get_netdev(void);
 #endif
--
2.32.0.3.g01195cf9f
Put some functions or macro into the header file. This is prepare for separating the virtio-related funcs. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/virtnet.c | 43 +++++++++++++++++++----------------- drivers/net/virtio/virtnet.h | 7 ++++++ 2 files changed, 30 insertions(+), 20 deletions(-) diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 5f508d9500f3..8f281a7f9d7a 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -33,8 +33,6 @@ module_param(csum, bool, 0444); module_param(gso, bool, 0444); module_param(napi_tx, bool, 0644); -/* FIXME: MTU in config. */ -#define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN) #define GOOD_COPY_LEN 128 static const unsigned long guest_offloads[] = { @@ -175,7 +173,7 @@ static void virtqueue_napi_complete(struct napi_struct *napi, } } -static void skb_xmit_done(struct virtqueue *vq) +void virtnet_skb_xmit_done(struct virtqueue *vq) { struct virtnet_info *vi = vq->vdev->priv; struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi; @@ -635,7 +633,7 @@ static struct sk_buff *receive_small(struct net_device *dev, unsigned int xdp_headroom = (unsigned long)ctx; unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom; unsigned int headroom = vi->hdr_len + header_offset; - unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + unsigned int buflen = SKB_DATA_ALIGN(VIRTNET_GOOD_PACKET_LEN + headroom) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); struct page *page = virt_to_head_page(buf); unsigned int delta = 0; @@ -646,9 +644,9 @@ static struct sk_buff *receive_small(struct net_device *dev, len -= vi->hdr_len; stats->bytes += len; - if (unlikely(len > GOOD_PACKET_LEN)) { + if (unlikely(len > VIRTNET_GOOD_PACKET_LEN)) { pr_debug("%s: rx error: len %u exceeds max size %d\n", - dev->name, len, GOOD_PACKET_LEN); + dev->name, len, VIRTNET_GOOD_PACKET_LEN); dev->stats.rx_length_errors++; goto err; } @@ -678,7 +676,7 @@ static 
struct sk_buff *receive_small(struct net_device *dev, xdp_headroom = virtnet_get_headroom(vi); header_offset = VIRTNET_RX_PAD + xdp_headroom; headroom = vi->hdr_len + header_offset; - buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + + buflen = SKB_DATA_ALIGN(VIRTNET_GOOD_PACKET_LEN + headroom) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); xdp_page = xdp_linearize_page(rq, &num_buf, page, offset, header_offset, @@ -1286,7 +1284,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq, char *buf; unsigned int xdp_headroom = virtnet_get_headroom(vi); void *ctx = (void *)(unsigned long)xdp_headroom; - int len = vi->hdr_len + VIRTNET_RX_PAD + GOOD_PACKET_LEN + xdp_headroom; + int len = vi->hdr_len + VIRTNET_RX_PAD + VIRTNET_GOOD_PACKET_LEN + xdp_headroom; int err; len = SKB_DATA_ALIGN(len) + @@ -1298,7 +1296,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq, get_page(alloc_frag->page); alloc_frag->offset += len; sg_init_one(rq->sg, buf + VIRTNET_RX_PAD + xdp_headroom, - vi->hdr_len + GOOD_PACKET_LEN); + vi->hdr_len + VIRTNET_GOOD_PACKET_LEN); err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp); if (err < 0) put_page(virt_to_head_page(buf)); @@ -1421,7 +1419,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, * Returns false if we couldn't fill entirely (OOM). * * Normally run in the receive path, but can also be run from ndo_open - * before we're receiving packets, or from refill_work which is + * before we're receiving packets, or from virtnet_refill_work which is * careful to disable receiving (using napi_disable). 
*/ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, @@ -1453,7 +1451,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq, return !oom; } -static void skb_recv_done(struct virtqueue *rvq) +void virtnet_skb_recv_done(struct virtqueue *rvq) { struct virtnet_info *vi = rvq->vdev->priv; struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)]; @@ -1498,7 +1496,7 @@ static void virtnet_napi_tx_disable(struct napi_struct *napi) napi_disable(napi); } -static void refill_work(struct work_struct *work) +void virtnet_refill_work(struct work_struct *work) { struct virtnet_info *vi container_of(work, struct virtnet_info, refill.work); @@ -1982,7 +1980,7 @@ static int virtnet_close(struct net_device *dev) /* Make sure NAPI doesn't schedule refill work */ disable_delayed_refill(vi); - /* Make sure refill_work doesn't re-enable napi! */ + /* Make sure virtnet_refill_work doesn't re-enable napi! */ cancel_delayed_work_sync(&vi->refill); for (i = 0; i < vi->max_queue_pairs; i++) { @@ -2480,7 +2478,7 @@ static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqu unsigned int min_buf_len = DIV_ROUND_UP(buf_len, rq_size); return max(max(min_buf_len, hdr_len) - hdr_len, - (unsigned int)GOOD_PACKET_LEN); + (unsigned int)VIRTNET_GOOD_PACKET_LEN); } static int virtnet_find_vqs(struct virtnet_info *vi) @@ -2525,8 +2523,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi) /* Allocate/initialize parameters for send/receive virtqueues */ for (i = 0; i < vi->max_queue_pairs; i++) { - callbacks[rxq2vq(i)] = skb_recv_done; - callbacks[txq2vq(i)] = skb_xmit_done; + callbacks[rxq2vq(i)] = virtnet_skb_recv_done; + callbacks[txq2vq(i)] = virtnet_skb_xmit_done; sprintf(vi->rq[i].name, "input.%d", i); sprintf(vi->sq[i].name, "output.%d", i); names[rxq2vq(i)] = vi->rq[i].name; @@ -2585,7 +2583,7 @@ static int virtnet_alloc_queues(struct virtnet_info *vi) if (!vi->rq) goto err_rq; - INIT_DELAYED_WORK(&vi->refill, refill_work); + 
INIT_DELAYED_WORK(&vi->refill, virtnet_refill_work); for (i = 0; i < vi->max_queue_pairs; i++) { vi->rq[i].pages = NULL; netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll, @@ -3045,14 +3043,19 @@ static int virtnet_probe(struct virtio_device *vdev) return err; } -static void remove_vq_common(struct virtnet_info *vi) +void virtnet_free_bufs(struct virtnet_info *vi) { - virtio_reset_device(vi->vdev); - /* Free unused buffers in both send and recv, if any. */ free_unused_bufs(vi); free_receive_bufs(vi); +} + +static void remove_vq_common(struct virtnet_info *vi) +{ + virtio_reset_device(vi->vdev); + + virtnet_free_bufs(vi); free_receive_page_frags(vi); diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h index 269ddc386418..1315dcf52f1b 100644 --- a/drivers/net/virtio/virtnet.h +++ b/drivers/net/virtio/virtnet.h @@ -26,6 +26,9 @@ DECLARE_EWMA(pkt_len, 0, 64) #define VIRTNET_DRIVER_VERSION "1.0.0" +/* FIXME: MTU in config. */ +#define VIRTNET_GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN) + struct virtnet_sq_stats { struct u64_stats_sync syncp; u64 packets; @@ -186,4 +189,8 @@ int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq, u32 ring_n int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs); void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev); const struct net_device_ops *virtnet_get_netdev(void); +void virtnet_skb_xmit_done(struct virtqueue *vq); +void virtnet_skb_recv_done(struct virtqueue *rvq); +void virtnet_refill_work(struct work_struct *work); +void virtnet_free_bufs(struct virtnet_info *vi); #endif -- 2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 14/16] virtio_net: move virtnet_[en/dis]able_delayed_refill to header file
Move virtnet_[en/dis]able_delayed_refill to the header file. This is preparation for separating the virtio-related functions.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio/virtnet.c | 20 +++-----------------
 drivers/net/virtio/virtnet.h | 15 +++++++++++++++
 2 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c
index 8f281a7f9d7a..75a74864c3fe 100644
--- a/drivers/net/virtio/virtnet.c
+++ b/drivers/net/virtio/virtnet.c
@@ -136,20 +136,6 @@ static struct page *get_a_page(struct virtnet_rq *rq, gfp_t gfp_mask)
 	return p;
 }
 
-static void enable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = true;
-	spin_unlock_bh(&vi->refill_lock);
-}
-
-static void disable_delayed_refill(struct virtnet_info *vi)
-{
-	spin_lock_bh(&vi->refill_lock);
-	vi->refill_enabled = false;
-	spin_unlock_bh(&vi->refill_lock);
-}
-
 static void virtqueue_napi_schedule(struct napi_struct *napi,
 				    struct virtqueue *vq)
 {
@@ -1622,7 +1608,7 @@ static int virtnet_open(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i, err;
 
-	enable_delayed_refill(vi);
+	virtnet_enable_delayed_refill(vi);
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
@@ -1979,7 +1965,7 @@ static int virtnet_close(struct net_device *dev)
 	int i;
 
 	/* Make sure NAPI doesn't schedule refill work */
-	disable_delayed_refill(vi);
+	virtnet_disable_delayed_refill(vi);
 	/* Make sure virtnet_refill_work doesn't re-enable napi! */
 	cancel_delayed_work_sync(&vi->refill);
 
@@ -2068,7 +2054,7 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
-	enable_delayed_refill(vi);
+	virtnet_enable_delayed_refill(vi);
 
 	if (netif_running(vi->dev)) {
 		err = virtnet_get_netdev()->ndo_open(vi->dev);
diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h
index 1315dcf52f1b..5f20e9103a0e 100644
--- a/drivers/net/virtio/virtnet.h
+++ b/drivers/net/virtio/virtnet.h
@@ -193,4 +193,19 @@ void virtnet_skb_xmit_done(struct virtqueue *vq);
 void virtnet_skb_recv_done(struct virtqueue *rvq);
 void virtnet_refill_work(struct work_struct *work);
 void virtnet_free_bufs(struct virtnet_info *vi);
+
+static inline void virtnet_enable_delayed_refill(struct virtnet_info *vi)
+{
+	spin_lock_bh(&vi->refill_lock);
+	vi->refill_enabled = true;
+	spin_unlock_bh(&vi->refill_lock);
+}
+
+static inline void virtnet_disable_delayed_refill(struct virtnet_info *vi)
+{
+	spin_lock_bh(&vi->refill_lock);
+	vi->refill_enabled = false;
+	spin_unlock_bh(&vi->refill_lock);
+}
+
 #endif
--
2.32.0.3.g01195cf9f
Xuan Zhuo
2023-Mar-28 09:28 UTC
[PATCH 15/16] virtio_net: add APIs to register/unregister virtio driver
This is preparation for separating the virtio-related functions.

Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com>
---
 drivers/net/virtio/virtnet.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c
index 75a74864c3fe..02989cace0fb 100644
--- a/drivers/net/virtio/virtnet.c
+++ b/drivers/net/virtio/virtnet.c
@@ -3146,6 +3146,16 @@ static struct virtio_driver virtio_net_driver = {
 #endif
 };
 
+int virtnet_register_virtio_driver(void)
+{
+	return register_virtio_driver(&virtio_net_driver);
+}
+
+void virtnet_unregister_virtio_driver(void)
+{
+	unregister_virtio_driver(&virtio_net_driver);
+}
+
 static __init int virtio_net_driver_init(void)
 {
 	int ret;
@@ -3154,7 +3164,7 @@ static __init int virtio_net_driver_init(void)
 	if (ret)
 		return ret;
 
-	ret = register_virtio_driver(&virtio_net_driver);
+	ret = virtnet_register_virtio_driver();
 	if (ret) {
 		virtnet_cpuhp_remove();
 		return ret;
@@ -3166,7 +3176,7 @@ module_init(virtio_net_driver_init);
 
 static __exit void virtio_net_driver_exit(void)
 {
-	unregister_virtio_driver(&virtio_net_driver);
+	virtnet_unregister_virtio_driver();
 	virtnet_cpuhp_remove();
 }
 module_exit(virtio_net_driver_exit);
--
2.32.0.3.g01195cf9f
Moving virtio-related functions such as virtio callbacks, virtio driver register to a separate file. Signed-off-by: Xuan Zhuo <xuanzhuo at linux.alibaba.com> --- drivers/net/virtio/Makefile | 3 +- drivers/net/virtio/virtnet.c | 884 +--------------------------- drivers/net/virtio/virtnet.h | 2 + drivers/net/virtio/virtnet_virtio.c | 880 +++++++++++++++++++++++++++ drivers/net/virtio/virtnet_virtio.h | 8 + 5 files changed, 895 insertions(+), 882 deletions(-) create mode 100644 drivers/net/virtio/virtnet_virtio.c create mode 100644 drivers/net/virtio/virtnet_virtio.h diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index 9b35fb00d6c7..6bdc870fa1c8 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -6,4 +6,5 @@ obj-$(CONFIG_VIRTIO_NET) += virtio_net.o virtio_net-y := virtnet.o virtnet_common.o virtnet_ctrl.o \ - virtnet_ethtool.o + virtnet_ethtool.o \ + virtnet_virtio.o diff --git a/drivers/net/virtio/virtnet.c b/drivers/net/virtio/virtnet.c index 02989cace0fb..ca9d3073ba93 100644 --- a/drivers/net/virtio/virtnet.c +++ b/drivers/net/virtio/virtnet.c @@ -4,48 +4,18 @@ * Copyright 2007 Rusty Russell <rusty at rustcorp.com.au> IBM Corporation */ //#define DEBUG -#include <linux/netdevice.h> -#include <linux/etherdevice.h> -#include <linux/module.h> -#include <linux/virtio.h> #include <linux/virtio_net.h> -#include <linux/bpf.h> #include <linux/bpf_trace.h> -#include <linux/scatterlist.h> -#include <linux/if_vlan.h> -#include <linux/slab.h> -#include <linux/filter.h> -#include <linux/kernel.h> -#include <net/route.h> #include <net/xdp.h> -#include <net/net_failover.h> #include "virtnet.h" #include "virtnet_common.h" #include "virtnet_ctrl.h" #include "virtnet_ethtool.h" - -static int napi_weight = NAPI_POLL_WEIGHT; -module_param(napi_weight, int, 0444); - -static bool csum = true, gso = true, napi_tx = true; -module_param(csum, bool, 0444); -module_param(gso, bool, 0444); -module_param(napi_tx, bool, 0644); +#include 
"virtnet_virtio.h" #define GOOD_COPY_LEN 128 -static const unsigned long guest_offloads[] = { - VIRTIO_NET_F_GUEST_TSO4, - VIRTIO_NET_F_GUEST_TSO6, - VIRTIO_NET_F_GUEST_ECN, - VIRTIO_NET_F_GUEST_UFO, - VIRTIO_NET_F_GUEST_CSUM, - VIRTIO_NET_F_GUEST_USO4, - VIRTIO_NET_F_GUEST_USO6, - VIRTIO_NET_F_GUEST_HDRLEN -}; - #define GUEST_OFFLOAD_GRO_HW_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \ (1ULL << VIRTIO_NET_F_GUEST_TSO6) | \ (1ULL << VIRTIO_NET_F_GUEST_ECN) | \ @@ -89,21 +59,11 @@ static int vq2txq(struct virtqueue *vq) return (vq->index - 1) / 2; } -static int txq2vq(int txq) -{ - return txq * 2 + 1; -} - static int vq2rxq(struct virtqueue *vq) { return vq->index / 2; } -static int rxq2vq(int rxq) -{ - return rxq * 2; -} - static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb) { return (struct virtio_net_hdr_mrg_rxbuf *)skb->cb; @@ -1570,7 +1530,7 @@ static void virtnet_poll_cleantx(struct virtnet_rq *rq) } } -static int virtnet_poll(struct napi_struct *napi, int budget) +int virtnet_poll(struct napi_struct *napi, int budget) { struct virtnet_rq *rq container_of(napi, struct virtnet_rq, napi); @@ -1634,7 +1594,7 @@ static int virtnet_open(struct net_device *dev) return 0; } -static int virtnet_poll_tx(struct napi_struct *napi, int budget) +int virtnet_poll_tx(struct napi_struct *napi, int budget) { struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi); struct virtnet_info *vi = sq->vq->vdev->priv; @@ -1949,16 +1909,6 @@ int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) return 0; } -static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) -{ - int err; - - rtnl_lock(); - err = _virtnet_set_queues(vi, queue_pairs); - rtnl_unlock(); - return err; -} - static int virtnet_close(struct net_device *dev) { struct virtnet_info *vi = netdev_priv(dev); @@ -1978,96 +1928,6 @@ static int virtnet_close(struct net_device *dev) return 0; } -static void virtnet_init_default_rss(struct virtnet_info *vi) -{ - u32 
indir_val = 0; - int i = 0; - - vi->ctrl->rss.hash_types = vi->rss_hash_types_supported; - vi->rss_hash_types_saved = vi->rss_hash_types_supported; - vi->ctrl->rss.indirection_table_mask = vi->rss_indir_table_size - ? vi->rss_indir_table_size - 1 : 0; - vi->ctrl->rss.unclassified_queue = 0; - - for (; i < vi->rss_indir_table_size; ++i) { - indir_val = ethtool_rxfh_indir_default(i, vi->curr_queue_pairs); - vi->ctrl->rss.indirection_table[i] = indir_val; - } - - vi->ctrl->rss.max_tx_vq = vi->curr_queue_pairs; - vi->ctrl->rss.hash_key_length = vi->rss_key_size; - - netdev_rss_key_fill(vi->ctrl->rss.key, vi->rss_key_size); -} - -static void virtnet_init_settings(struct net_device *dev) -{ - struct virtnet_info *vi = netdev_priv(dev); - - vi->speed = SPEED_UNKNOWN; - vi->duplex = DUPLEX_UNKNOWN; -} - -static void virtnet_update_settings(struct virtnet_info *vi) -{ - u32 speed; - u8 duplex; - - if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX)) - return; - - virtio_cread_le(vi->vdev, struct virtio_net_config, speed, &speed); - - if (ethtool_validate_speed(speed)) - vi->speed = speed; - - virtio_cread_le(vi->vdev, struct virtio_net_config, duplex, &duplex); - - if (ethtool_validate_duplex(duplex)) - vi->duplex = duplex; -} - -static void virtnet_freeze_down(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - - /* Make sure no work handler is accessing the device */ - flush_work(&vi->config_work); - - netif_tx_lock_bh(vi->dev); - netif_device_detach(vi->dev); - netif_tx_unlock_bh(vi->dev); - if (netif_running(vi->dev)) - virtnet_get_netdev()->ndo_stop(vi->dev); -} - -static int init_vqs(struct virtnet_info *vi); - -static int virtnet_restore_up(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - int err; - - err = init_vqs(vi); - if (err) - return err; - - virtio_device_ready(vdev); - - virtnet_enable_delayed_refill(vi); - - if (netif_running(vi->dev)) { - err = virtnet_get_netdev()->ndo_open(vi->dev); - if (err) - 
return err; - } - - netif_tx_lock_bh(vi->dev); - netif_device_attach(vi->dev); - netif_tx_unlock_bh(vi->dev); - return err; -} - static int virtnet_clear_guest_offloads(struct virtnet_info *vi) { u64 offloads = 0; @@ -2308,68 +2168,6 @@ const struct net_device_ops *virtnet_get_netdev(void) return &virtnet_netdev; } -static void virtnet_config_changed_work(struct work_struct *work) -{ - struct virtnet_info *vi - container_of(work, struct virtnet_info, config_work); - u16 v; - - if (virtio_cread_feature(vi->vdev, VIRTIO_NET_F_STATUS, - struct virtio_net_config, status, &v) < 0) - return; - - if (v & VIRTIO_NET_S_ANNOUNCE) { - netdev_notify_peers(vi->dev); - - rtnl_lock(); - virtnet_ack_link_announce(vi); - rtnl_unlock(); - } - - /* Ignore unknown (future) status bits */ - v &= VIRTIO_NET_S_LINK_UP; - - if (vi->status == v) - return; - - vi->status = v; - - if (vi->status & VIRTIO_NET_S_LINK_UP) { - virtnet_update_settings(vi); - netif_carrier_on(vi->dev); - netif_tx_wake_all_queues(vi->dev); - } else { - netif_carrier_off(vi->dev); - netif_tx_stop_all_queues(vi->dev); - } -} - -static void virtnet_config_changed(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - - schedule_work(&vi->config_work); -} - -static void virtnet_free_queues(struct virtnet_info *vi) -{ - int i; - - for (i = 0; i < vi->max_queue_pairs; i++) { - __netif_napi_del(&vi->rq[i].napi); - __netif_napi_del(&vi->sq[i].napi); - } - - /* We called __netif_napi_del(), - * we need to respect an RCU grace period before freeing vi->rq - */ - synchronize_net(); - - kfree(vi->rq); - kfree(vi->sq); - kfree(vi->ctrl); -} - static void _free_receive_bufs(struct virtnet_info *vi) { struct bpf_prog *old_prog; @@ -2393,14 +2191,6 @@ static void free_receive_bufs(struct virtnet_info *vi) rtnl_unlock(); } -static void free_receive_page_frags(struct virtnet_info *vi) -{ - int i; - for (i = 0; i < vi->max_queue_pairs; i++) - if (vi->rq[i].alloc_frag.page) - put_page(vi->rq[i].alloc_frag.page); -} - 
static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf) { if (!is_xdp_frame(buf)) @@ -2440,187 +2230,6 @@ static void free_unused_bufs(struct virtnet_info *vi) } } -static void virtnet_del_vqs(struct virtnet_info *vi) -{ - struct virtio_device *vdev = vi->vdev; - - virtnet_clean_affinity(vi); - - vdev->config->del_vqs(vdev); - - virtnet_free_queues(vi); -} - -/* How large should a single buffer be so a queue full of these can fit at - * least one full packet? - * Logic below assumes the mergeable buffer header is used. - */ -static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqueue *vq) -{ - const unsigned int hdr_len = vi->hdr_len; - unsigned int rq_size = virtqueue_get_vring_size(vq); - unsigned int packet_len = vi->big_packets ? IP_MAX_MTU : vi->dev->max_mtu; - unsigned int buf_len = hdr_len + ETH_HLEN + VLAN_HLEN + packet_len; - unsigned int min_buf_len = DIV_ROUND_UP(buf_len, rq_size); - - return max(max(min_buf_len, hdr_len) - hdr_len, - (unsigned int)VIRTNET_GOOD_PACKET_LEN); -} - -static int virtnet_find_vqs(struct virtnet_info *vi) -{ - vq_callback_t **callbacks; - struct virtqueue **vqs; - int ret = -ENOMEM; - int i, total_vqs; - const char **names; - bool *ctx; - - /* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by - * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by - * possible control vq. 
- */ - total_vqs = vi->max_queue_pairs * 2 + - virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ); - - /* Allocate space for find_vqs parameters */ - vqs = kcalloc(total_vqs, sizeof(*vqs), GFP_KERNEL); - if (!vqs) - goto err_vq; - callbacks = kmalloc_array(total_vqs, sizeof(*callbacks), GFP_KERNEL); - if (!callbacks) - goto err_callback; - names = kmalloc_array(total_vqs, sizeof(*names), GFP_KERNEL); - if (!names) - goto err_names; - if (!vi->big_packets || vi->mergeable_rx_bufs) { - ctx = kcalloc(total_vqs, sizeof(*ctx), GFP_KERNEL); - if (!ctx) - goto err_ctx; - } else { - ctx = NULL; - } - - /* Parameters for control virtqueue, if any */ - if (vi->has_cvq) { - callbacks[total_vqs - 1] = NULL; - names[total_vqs - 1] = "control"; - } - - /* Allocate/initialize parameters for send/receive virtqueues */ - for (i = 0; i < vi->max_queue_pairs; i++) { - callbacks[rxq2vq(i)] = virtnet_skb_recv_done; - callbacks[txq2vq(i)] = virtnet_skb_xmit_done; - sprintf(vi->rq[i].name, "input.%d", i); - sprintf(vi->sq[i].name, "output.%d", i); - names[rxq2vq(i)] = vi->rq[i].name; - names[txq2vq(i)] = vi->sq[i].name; - if (ctx) - ctx[rxq2vq(i)] = true; - } - - ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks, - names, ctx, NULL); - if (ret) - goto err_find; - - if (vi->has_cvq) { - vi->cvq = vqs[total_vqs - 1]; - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VLAN)) - vi->dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; - } - - for (i = 0; i < vi->max_queue_pairs; i++) { - vi->rq[i].vq = vqs[rxq2vq(i)]; - vi->rq[i].min_buf_len = mergeable_min_buf_len(vi, vi->rq[i].vq); - vi->sq[i].vq = vqs[txq2vq(i)]; - } - - /* run here: ret == 0. 
*/ - - -err_find: - kfree(ctx); -err_ctx: - kfree(names); -err_names: - kfree(callbacks); -err_callback: - kfree(vqs); -err_vq: - return ret; -} - -static int virtnet_alloc_queues(struct virtnet_info *vi) -{ - int i; - - if (vi->has_cvq) { - vi->ctrl = kzalloc(sizeof(*vi->ctrl), GFP_KERNEL); - if (!vi->ctrl) - goto err_ctrl; - } else { - vi->ctrl = NULL; - } - vi->sq = kcalloc(vi->max_queue_pairs, sizeof(*vi->sq), GFP_KERNEL); - if (!vi->sq) - goto err_sq; - vi->rq = kcalloc(vi->max_queue_pairs, sizeof(*vi->rq), GFP_KERNEL); - if (!vi->rq) - goto err_rq; - - INIT_DELAYED_WORK(&vi->refill, virtnet_refill_work); - for (i = 0; i < vi->max_queue_pairs; i++) { - vi->rq[i].pages = NULL; - netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll, - napi_weight); - netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi, - virtnet_poll_tx, - napi_tx ? napi_weight : 0); - - sg_init_table(vi->rq[i].sg, ARRAY_SIZE(vi->rq[i].sg)); - ewma_pkt_len_init(&vi->rq[i].mrg_avg_pkt_len); - sg_init_table(vi->sq[i].sg, ARRAY_SIZE(vi->sq[i].sg)); - - u64_stats_init(&vi->rq[i].stats.syncp); - u64_stats_init(&vi->sq[i].stats.syncp); - } - - return 0; - -err_rq: - kfree(vi->sq); -err_sq: - kfree(vi->ctrl); -err_ctrl: - return -ENOMEM; -} - -static int init_vqs(struct virtnet_info *vi) -{ - int ret; - - /* Allocate send & receive queues */ - ret = virtnet_alloc_queues(vi); - if (ret) - goto err; - - ret = virtnet_find_vqs(vi); - if (ret) - goto err_free; - - cpus_read_lock(); - virtnet_set_affinity(vi); - cpus_read_unlock(); - - return 0; - -err_free: - virtnet_free_queues(vi); -err: - return ret; -} - #ifdef CONFIG_SYSFS static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue, char *buf) @@ -2662,373 +2271,6 @@ void virtnet_dev_rx_queue_group(struct virtnet_info *vi, struct net_device *dev) } #endif -static bool virtnet_fail_on_feature(struct virtio_device *vdev, - unsigned int fbit, - const char *fname, const char *dname) -{ - if (!virtio_has_feature(vdev, fbit)) - return 
false; - - dev_err(&vdev->dev, "device advertises feature %s but not %s", - fname, dname); - - return true; -} - -#define VIRTNET_FAIL_ON(vdev, fbit, dbit) \ - virtnet_fail_on_feature(vdev, fbit, #fbit, dbit) - -static bool virtnet_validate_features(struct virtio_device *vdev) -{ - if (!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) && - (VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_RX, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_VLAN, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_RSS, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_HASH_REPORT, - "VIRTIO_NET_F_CTRL_VQ") || - VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_NOTF_COAL, - "VIRTIO_NET_F_CTRL_VQ"))) { - return false; - } - - return true; -} - -#define MIN_MTU ETH_MIN_MTU -#define MAX_MTU ETH_MAX_MTU - -static int virtnet_validate(struct virtio_device *vdev) -{ - if (!vdev->config->get) { - dev_err(&vdev->dev, "%s failure: config access disabled\n", - __func__); - return -EINVAL; - } - - if (!virtnet_validate_features(vdev)) - return -EINVAL; - - if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) { - int mtu = virtio_cread16(vdev, - offsetof(struct virtio_net_config, - mtu)); - if (mtu < MIN_MTU) - __virtio_clear_bit(vdev, VIRTIO_NET_F_MTU); - } - - if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY) && - !virtio_has_feature(vdev, VIRTIO_NET_F_MAC)) { - dev_warn(&vdev->dev, "device advertises feature VIRTIO_NET_F_STANDBY but not VIRTIO_NET_F_MAC, disabling standby"); - __virtio_clear_bit(vdev, VIRTIO_NET_F_STANDBY); - } - - return 0; -} - -static bool virtnet_check_guest_gso(const struct virtnet_info *vi) -{ - return virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || - 
virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) || - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) || - (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_USO4) && - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_USO6)); -} - -static void virtnet_set_big_packets(struct virtnet_info *vi, const int mtu) -{ - bool guest_gso = virtnet_check_guest_gso(vi); - - /* If device can receive ANY guest GSO packets, regardless of mtu, - * allocate packets of maximum size, otherwise limit it to only - * mtu size worth only. - */ - if (mtu > ETH_DATA_LEN || guest_gso) { - vi->big_packets = true; - vi->big_packets_num_skbfrags = guest_gso ? MAX_SKB_FRAGS : DIV_ROUND_UP(mtu, PAGE_SIZE); - } -} - -static int virtnet_probe(struct virtio_device *vdev) -{ - int i, err = -ENOMEM; - struct net_device *dev; - struct virtnet_info *vi; - u16 max_queue_pairs; - int mtu = 0; - - /* Find if host supports multiqueue/rss virtio_net device */ - max_queue_pairs = 1; - if (virtio_has_feature(vdev, VIRTIO_NET_F_MQ) || virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) - max_queue_pairs = - virtio_cread16(vdev, offsetof(struct virtio_net_config, max_virtqueue_pairs)); - - /* We need at least 2 queue's */ - if (max_queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN || - max_queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX || - !virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) - max_queue_pairs = 1; - - /* Allocate ourselves a network device with room for our info */ - dev = alloc_etherdev_mq(sizeof(struct virtnet_info), max_queue_pairs); - if (!dev) - return -ENOMEM; - - /* Set up network device as normal. */ - dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE | - IFF_TX_SKB_NO_LINEAR; - dev->netdev_ops = virtnet_get_netdev(); - dev->features = NETIF_F_HIGHDMA; - - dev->ethtool_ops = virtnet_get_ethtool_ops(); - SET_NETDEV_DEV(dev, &vdev->dev); - - /* Do we support "hardware" checksums? */ - if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) { - /* This opens up the world of extra features.
*/ - dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG; - if (csum) - dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG; - - if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) { - dev->hw_features |= NETIF_F_TSO - | NETIF_F_TSO_ECN | NETIF_F_TSO6; - } - /* Individual feature bits: what can host handle? */ - if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4)) - dev->hw_features |= NETIF_F_TSO; - if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO6)) - dev->hw_features |= NETIF_F_TSO6; - if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN)) - dev->hw_features |= NETIF_F_TSO_ECN; - if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_USO)) - dev->hw_features |= NETIF_F_GSO_UDP_L4; - - dev->features |= NETIF_F_GSO_ROBUST; - - if (gso) - dev->features |= dev->hw_features & NETIF_F_ALL_TSO; - /* (!csum && gso) case will be fixed by register_netdev() */ - } - if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) - dev->features |= NETIF_F_RXCSUM; - if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || - virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) - dev->features |= NETIF_F_GRO_HW; - if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) - dev->hw_features |= NETIF_F_GRO_HW; - - dev->vlan_features = dev->features; - dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; - - /* MTU range: 68 - 65535 */ - dev->min_mtu = MIN_MTU; - dev->max_mtu = MAX_MTU; - - /* Configuration may specify what MAC to use. Otherwise random. 
*/ - if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC)) { - u8 addr[ETH_ALEN]; - - virtio_cread_bytes(vdev, - offsetof(struct virtio_net_config, mac), - addr, ETH_ALEN); - eth_hw_addr_set(dev, addr); - } else { - eth_hw_addr_random(dev); - dev_info(&vdev->dev, "Assigned random MAC address %pM\n", - dev->dev_addr); - } - - /* Set up our device-specific information */ - vi = netdev_priv(dev); - vi->dev = dev; - vi->vdev = vdev; - vdev->priv = vi; - - INIT_WORK(&vi->config_work, virtnet_config_changed_work); - spin_lock_init(&vi->refill_lock); - - if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { - vi->mergeable_rx_bufs = true; - dev->xdp_features |= NETDEV_XDP_ACT_RX_SG; - } - - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) { - vi->rx_usecs = 0; - vi->tx_usecs = 0; - vi->tx_max_packets = 0; - vi->rx_max_packets = 0; - } - - if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT)) - vi->has_rss_hash_report = true; - - if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) - vi->has_rss = true; - - if (vi->has_rss || vi->has_rss_hash_report) { - vi->rss_indir_table_size = - virtio_cread16(vdev, offsetof(struct virtio_net_config, - rss_max_indirection_table_length)); - vi->rss_key_size = - virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size)); - - vi->rss_hash_types_supported = - virtio_cread32(vdev, offsetof(struct virtio_net_config, supported_hash_types)); - vi->rss_hash_types_supported &= - ~(VIRTIO_NET_RSS_HASH_TYPE_IP_EX | - VIRTIO_NET_RSS_HASH_TYPE_TCP_EX | - VIRTIO_NET_RSS_HASH_TYPE_UDP_EX); - - dev->hw_features |= NETIF_F_RXHASH; - } - - if (vi->has_rss_hash_report) - vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash); - else if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) || - virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) - vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf); - else - vi->hdr_len = sizeof(struct virtio_net_hdr); - - if (virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT) || - virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) -
vi->any_header_sg = true; - - if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) - vi->has_cvq = true; - - if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) { - mtu = virtio_cread16(vdev, - offsetof(struct virtio_net_config, - mtu)); - if (mtu < dev->min_mtu) { - /* Should never trigger: MTU was previously validated - * in virtnet_validate. - */ - dev_err(&vdev->dev, - "device MTU appears to have changed it is now %d < %d", - mtu, dev->min_mtu); - err = -EINVAL; - goto free; - } - - dev->mtu = mtu; - dev->max_mtu = mtu; - } - - virtnet_set_big_packets(vi, mtu); - - if (vi->any_header_sg) - dev->needed_headroom = vi->hdr_len; - - /* Enable multiqueue by default */ - if (num_online_cpus() >= max_queue_pairs) - vi->curr_queue_pairs = max_queue_pairs; - else - vi->curr_queue_pairs = num_online_cpus(); - vi->max_queue_pairs = max_queue_pairs; - - /* Allocate/initialize the rx/tx queues, and invoke find_vqs */ - err = init_vqs(vi); - if (err) - goto free; - - virtnet_dev_rx_queue_group(vi, dev); - netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs); - netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs); - - virtnet_init_settings(dev); - - if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY)) { - vi->failover = net_failover_create(vi->dev); - if (IS_ERR(vi->failover)) { - err = PTR_ERR(vi->failover); - goto free_vqs; - } - } - - if (vi->has_rss || vi->has_rss_hash_report) - virtnet_init_default_rss(vi); - - /* serialize netdev register + virtio_device_ready() with ndo_open() */ - rtnl_lock(); - - err = register_netdevice(dev); - if (err) { - pr_debug("virtio_net: registering device failed\n"); - rtnl_unlock(); - goto free_failover; - } - - virtio_device_ready(vdev); - - /* a random MAC address has been assigned, notify the device. 
- * We don't fail probe if VIRTIO_NET_F_CTRL_MAC_ADDR is not there - * because many devices work fine without getting MAC explicitly - */ - if (!virtio_has_feature(vdev, VIRTIO_NET_F_MAC) && - virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) { - if (virtnet_ctrl_set_mac_address(vi, dev->dev_addr, dev->addr_len)) { - rtnl_unlock(); - err = -EINVAL; - goto free_unregister_netdev; - } - } - - rtnl_unlock(); - - err = virtnet_cpu_notif_add(vi); - if (err) { - pr_debug("virtio_net: registering cpu notifier failed\n"); - goto free_unregister_netdev; - } - - virtnet_set_queues(vi, vi->curr_queue_pairs); - - /* Assume link up if device can't report link status, - otherwise get link status from config. */ - netif_carrier_off(dev); - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) { - schedule_work(&vi->config_work); - } else { - vi->status = VIRTIO_NET_S_LINK_UP; - virtnet_update_settings(vi); - netif_carrier_on(dev); - } - - for (i = 0; i < ARRAY_SIZE(guest_offloads); i++) - if (virtio_has_feature(vi->vdev, guest_offloads[i])) - set_bit(guest_offloads[i], &vi->guest_offloads); - vi->guest_offloads_capable = vi->guest_offloads; - - pr_debug("virtnet: registered device %s with %d RX and TX vq's\n", - dev->name, max_queue_pairs); - - return 0; - -free_unregister_netdev: - unregister_netdev(dev); -free_failover: - net_failover_destroy(vi->failover); -free_vqs: - virtio_reset_device(vdev); - cancel_delayed_work_sync(&vi->refill); - free_receive_page_frags(vi); - virtnet_del_vqs(vi); -free: - free_netdev(dev); - return err; -} - void virtnet_free_bufs(struct virtnet_info *vi) { /* Free unused buffers in both send and recv, if any. 
*/ @@ -3037,125 +2279,6 @@ void virtnet_free_bufs(struct virtnet_info *vi) free_receive_bufs(vi); } -static void remove_vq_common(struct virtnet_info *vi) -{ - virtio_reset_device(vi->vdev); - - virtnet_free_bufs(vi); - - free_receive_page_frags(vi); - - virtnet_del_vqs(vi); -} - -static void virtnet_remove(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - - virtnet_cpu_notif_remove(vi); - - /* Make sure no work handler is accessing the device. */ - flush_work(&vi->config_work); - - unregister_netdev(vi->dev); - - net_failover_destroy(vi->failover); - - remove_vq_common(vi); - - free_netdev(vi->dev); -} - -static __maybe_unused int virtnet_freeze(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - - virtnet_cpu_notif_remove(vi); - virtnet_freeze_down(vdev); - remove_vq_common(vi); - - return 0; -} - -static __maybe_unused int virtnet_restore(struct virtio_device *vdev) -{ - struct virtnet_info *vi = vdev->priv; - int err; - - err = virtnet_restore_up(vdev); - if (err) - return err; - virtnet_set_queues(vi, vi->curr_queue_pairs); - - err = virtnet_cpu_notif_add(vi); - if (err) { - virtnet_freeze_down(vdev); - remove_vq_common(vi); - return err; - } - - return 0; -} - -static struct virtio_device_id id_table[] = { - { VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID }, - { 0 }, -}; - -#define VIRTNET_FEATURES \ - VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM, \ - VIRTIO_NET_F_MAC, \ - VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6, \ - VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, \ - VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO, \ - VIRTIO_NET_F_HOST_USO, VIRTIO_NET_F_GUEST_USO4, VIRTIO_NET_F_GUEST_USO6, \ - VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ, \ - VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN, \ - VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \ - VIRTIO_NET_F_CTRL_MAC_ADDR, \ - VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \ - VIRTIO_NET_F_SPEED_DUPLEX, 
VIRTIO_NET_F_STANDBY, \ - VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \ - VIRTIO_NET_F_GUEST_HDRLEN - -static unsigned int features[] = { - VIRTNET_FEATURES, -}; - -static unsigned int features_legacy[] = { - VIRTNET_FEATURES, - VIRTIO_NET_F_GSO, - VIRTIO_F_ANY_LAYOUT, -}; - -static struct virtio_driver virtio_net_driver = { - .feature_table = features, - .feature_table_size = ARRAY_SIZE(features), - .feature_table_legacy = features_legacy, - .feature_table_size_legacy = ARRAY_SIZE(features_legacy), - .driver.name = KBUILD_MODNAME, - .driver.owner = THIS_MODULE, - .id_table = id_table, - .validate = virtnet_validate, - .probe = virtnet_probe, - .remove = virtnet_remove, - .config_changed = virtnet_config_changed, -#ifdef CONFIG_PM_SLEEP - .freeze = virtnet_freeze, - .restore = virtnet_restore, -#endif -}; - -int virtnet_register_virtio_driver(void) -{ - return register_virtio_driver(&virtio_net_driver); -} - -void virtnet_unregister_virtio_driver(void) -{ - unregister_virtio_driver(&virtio_net_driver); -} - static __init int virtio_net_driver_init(void) { int ret; @@ -3181,6 +2304,5 @@ static __exit void virtio_net_driver_exit(void) } module_exit(virtio_net_driver_exit); -MODULE_DEVICE_TABLE(virtio, id_table); MODULE_DESCRIPTION("Virtio network driver"); MODULE_LICENSE("GPL"); diff --git a/drivers/net/virtio/virtnet.h b/drivers/net/virtio/virtnet.h index 5f20e9103a0e..782654d60357 100644 --- a/drivers/net/virtio/virtnet.h +++ b/drivers/net/virtio/virtnet.h @@ -193,6 +193,8 @@ void virtnet_skb_xmit_done(struct virtqueue *vq); void virtnet_skb_recv_done(struct virtqueue *rvq); void virtnet_refill_work(struct work_struct *work); void virtnet_free_bufs(struct virtnet_info *vi); +int virtnet_poll(struct napi_struct *napi, int budget); +int virtnet_poll_tx(struct napi_struct *napi, int budget); static inline void virtnet_enable_delayed_refill(struct virtnet_info *vi) { diff --git a/drivers/net/virtio/virtnet_virtio.c 
b/drivers/net/virtio/virtnet_virtio.c new file mode 100644 index 000000000000..31a19dacb3a7 --- /dev/null +++ b/drivers/net/virtio/virtnet_virtio.c @@ -0,0 +1,880 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include <linux/virtio_net.h> +#include <linux/filter.h> +#include <net/route.h> +#include <net/net_failover.h> + +#include "virtnet.h" +#include "virtnet_common.h" +#include "virtnet_ctrl.h" +#include "virtnet_ethtool.h" +#include "virtnet_virtio.h" + +static int napi_weight = NAPI_POLL_WEIGHT; +module_param(napi_weight, int, 0444); + +static bool csum = true, gso = true, napi_tx = true; +module_param(csum, bool, 0444); +module_param(gso, bool, 0444); +module_param(napi_tx, bool, 0644); + +static const unsigned long guest_offloads[] = { + VIRTIO_NET_F_GUEST_TSO4, + VIRTIO_NET_F_GUEST_TSO6, + VIRTIO_NET_F_GUEST_ECN, + VIRTIO_NET_F_GUEST_UFO, + VIRTIO_NET_F_GUEST_CSUM, + VIRTIO_NET_F_GUEST_USO4, + VIRTIO_NET_F_GUEST_USO6, + VIRTIO_NET_F_GUEST_HDRLEN +}; + +static int txq2vq(int txq) +{ + return txq * 2 + 1; +} + +static int rxq2vq(int rxq) +{ + return rxq * 2; +} + +static void virtnet_init_default_rss(struct virtnet_info *vi) +{ + u32 indir_val = 0; + int i = 0; + + vi->ctrl->rss.hash_types = vi->rss_hash_types_supported; + vi->rss_hash_types_saved = vi->rss_hash_types_supported; + vi->ctrl->rss.indirection_table_mask = vi->rss_indir_table_size + ? 
vi->rss_indir_table_size - 1 : 0; + vi->ctrl->rss.unclassified_queue = 0; + + for (; i < vi->rss_indir_table_size; ++i) { + indir_val = ethtool_rxfh_indir_default(i, vi->curr_queue_pairs); + vi->ctrl->rss.indirection_table[i] = indir_val; + } + + vi->ctrl->rss.max_tx_vq = vi->curr_queue_pairs; + vi->ctrl->rss.hash_key_length = vi->rss_key_size; + + netdev_rss_key_fill(vi->ctrl->rss.key, vi->rss_key_size); +} + +static void virtnet_init_settings(struct net_device *dev) +{ + struct virtnet_info *vi = netdev_priv(dev); + + vi->speed = SPEED_UNKNOWN; + vi->duplex = DUPLEX_UNKNOWN; +} + +static void virtnet_update_settings(struct virtnet_info *vi) +{ + u32 speed; + u8 duplex; + + if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX)) + return; + + virtio_cread_le(vi->vdev, struct virtio_net_config, speed, &speed); + + if (ethtool_validate_speed(speed)) + vi->speed = speed; + + virtio_cread_le(vi->vdev, struct virtio_net_config, duplex, &duplex); + + if (ethtool_validate_duplex(duplex)) + vi->duplex = duplex; +} + +static void virtnet_freeze_down(struct virtio_device *vdev) +{ + struct virtnet_info *vi = vdev->priv; + + /* Make sure no work handler is accessing the device */ + flush_work(&vi->config_work); + + netif_tx_lock_bh(vi->dev); + netif_device_detach(vi->dev); + netif_tx_unlock_bh(vi->dev); + if (netif_running(vi->dev)) + virtnet_get_netdev()->ndo_stop(vi->dev); +} + +static int init_vqs(struct virtnet_info *vi); + +static int virtnet_restore_up(struct virtio_device *vdev) +{ + struct virtnet_info *vi = vdev->priv; + int err; + + err = init_vqs(vi); + if (err) + return err; + + virtio_device_ready(vdev); + + virtnet_enable_delayed_refill(vi); + + if (netif_running(vi->dev)) { + err = virtnet_get_netdev()->ndo_open(vi->dev); + if (err) + return err; + } + + netif_tx_lock_bh(vi->dev); + netif_device_attach(vi->dev); + netif_tx_unlock_bh(vi->dev); + return err; +} + +static void virtnet_config_changed_work(struct work_struct *work) +{ + struct virtnet_info 
*vi = + container_of(work, struct virtnet_info, config_work); + u16 v; + + if (virtio_cread_feature(vi->vdev, VIRTIO_NET_F_STATUS, + struct virtio_net_config, status, &v) < 0) + return; + + if (v & VIRTIO_NET_S_ANNOUNCE) { + netdev_notify_peers(vi->dev); + + rtnl_lock(); + virtnet_ack_link_announce(vi); + rtnl_unlock(); + } + + /* Ignore unknown (future) status bits */ + v &= VIRTIO_NET_S_LINK_UP; + + if (vi->status == v) + return; + + vi->status = v; + + if (vi->status & VIRTIO_NET_S_LINK_UP) { + virtnet_update_settings(vi); + netif_carrier_on(vi->dev); + netif_tx_wake_all_queues(vi->dev); + } else { + netif_carrier_off(vi->dev); + netif_tx_stop_all_queues(vi->dev); + } +} + +static void virtnet_config_changed(struct virtio_device *vdev) +{ + struct virtnet_info *vi = vdev->priv; + + schedule_work(&vi->config_work); +} + +static void virtnet_free_queues(struct virtnet_info *vi) +{ + int i; + + for (i = 0; i < vi->max_queue_pairs; i++) { + __netif_napi_del(&vi->rq[i].napi); + __netif_napi_del(&vi->sq[i].napi); + } + + /* We called __netif_napi_del(), + * we need to respect an RCU grace period before freeing vi->rq + */ + synchronize_net(); + + kfree(vi->rq); + kfree(vi->sq); + kfree(vi->ctrl); +} + +static void free_receive_page_frags(struct virtnet_info *vi) +{ + int i; + + for (i = 0; i < vi->max_queue_pairs; i++) + if (vi->rq[i].alloc_frag.page) + put_page(vi->rq[i].alloc_frag.page); +} + +static void virtnet_del_vqs(struct virtnet_info *vi) +{ + struct virtio_device *vdev = vi->vdev; + + virtnet_clean_affinity(vi); + + vdev->config->del_vqs(vdev); + + virtnet_free_queues(vi); +} + +/* How large should a single buffer be so a queue full of these can fit at + * least one full packet? + * Logic below assumes the mergeable buffer header is used.
+ */ +static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqueue *vq) +{ + const unsigned int hdr_len = vi->hdr_len; + unsigned int rq_size = virtqueue_get_vring_size(vq); + unsigned int packet_len = vi->big_packets ? IP_MAX_MTU : vi->dev->max_mtu; + unsigned int buf_len = hdr_len + ETH_HLEN + VLAN_HLEN + packet_len; + unsigned int min_buf_len = DIV_ROUND_UP(buf_len, rq_size); + + return max(max(min_buf_len, hdr_len) - hdr_len, + (unsigned int)VIRTNET_GOOD_PACKET_LEN); +} + +static int virtnet_find_vqs(struct virtnet_info *vi) +{ + vq_callback_t **callbacks; + struct virtqueue **vqs; + int ret = -ENOMEM; + int i, total_vqs; + const char **names; + bool *ctx; + + /* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by + * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by + * possible control vq. + */ + total_vqs = vi->max_queue_pairs * 2 + + virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ); + + /* Allocate space for find_vqs parameters */ + vqs = kcalloc(total_vqs, sizeof(*vqs), GFP_KERNEL); + if (!vqs) + goto err_vq; + callbacks = kmalloc_array(total_vqs, sizeof(*callbacks), GFP_KERNEL); + if (!callbacks) + goto err_callback; + names = kmalloc_array(total_vqs, sizeof(*names), GFP_KERNEL); + if (!names) + goto err_names; + if (!vi->big_packets || vi->mergeable_rx_bufs) { + ctx = kcalloc(total_vqs, sizeof(*ctx), GFP_KERNEL); + if (!ctx) + goto err_ctx; + } else { + ctx = NULL; + } + + /* Parameters for control virtqueue, if any */ + if (vi->has_cvq) { + callbacks[total_vqs - 1] = NULL; + names[total_vqs - 1] = "control"; + } + + /* Allocate/initialize parameters for send/receive virtqueues */ + for (i = 0; i < vi->max_queue_pairs; i++) { + callbacks[rxq2vq(i)] = virtnet_skb_recv_done; + callbacks[txq2vq(i)] = virtnet_skb_xmit_done; + sprintf(vi->rq[i].name, "input.%d", i); + sprintf(vi->sq[i].name, "output.%d", i); + names[rxq2vq(i)] = vi->rq[i].name; + names[txq2vq(i)] = vi->sq[i].name; + if (ctx) + 
ctx[rxq2vq(i)] = true; + } + + ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks, + names, ctx, NULL); + if (ret) + goto err_find; + + if (vi->has_cvq) { + vi->cvq = vqs[total_vqs - 1]; + if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VLAN)) + vi->dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; + } + + for (i = 0; i < vi->max_queue_pairs; i++) { + vi->rq[i].vq = vqs[rxq2vq(i)]; + vi->rq[i].min_buf_len = mergeable_min_buf_len(vi, vi->rq[i].vq); + vi->sq[i].vq = vqs[txq2vq(i)]; + } + + /* run here: ret == 0. */ + +err_find: + kfree(ctx); +err_ctx: + kfree(names); +err_names: + kfree(callbacks); +err_callback: + kfree(vqs); +err_vq: + return ret; +} + +static int virtnet_alloc_queues(struct virtnet_info *vi) +{ + int i; + + if (vi->has_cvq) { + vi->ctrl = kzalloc(sizeof(*vi->ctrl), GFP_KERNEL); + if (!vi->ctrl) + goto err_ctrl; + } else { + vi->ctrl = NULL; + } + vi->sq = kcalloc(vi->max_queue_pairs, sizeof(*vi->sq), GFP_KERNEL); + if (!vi->sq) + goto err_sq; + vi->rq = kcalloc(vi->max_queue_pairs, sizeof(*vi->rq), GFP_KERNEL); + if (!vi->rq) + goto err_rq; + + INIT_DELAYED_WORK(&vi->refill, virtnet_refill_work); + for (i = 0; i < vi->max_queue_pairs; i++) { + vi->rq[i].pages = NULL; + netif_napi_add_weight(vi->dev, &vi->rq[i].napi, virtnet_poll, + napi_weight); + netif_napi_add_tx_weight(vi->dev, &vi->sq[i].napi, + virtnet_poll_tx, + napi_tx ? 
napi_weight : 0); + + sg_init_table(vi->rq[i].sg, ARRAY_SIZE(vi->rq[i].sg)); + ewma_pkt_len_init(&vi->rq[i].mrg_avg_pkt_len); + sg_init_table(vi->sq[i].sg, ARRAY_SIZE(vi->sq[i].sg)); + + u64_stats_init(&vi->rq[i].stats.syncp); + u64_stats_init(&vi->sq[i].stats.syncp); + } + + return 0; + +err_rq: + kfree(vi->sq); +err_sq: + kfree(vi->ctrl); +err_ctrl: + return -ENOMEM; +} + +static int init_vqs(struct virtnet_info *vi) +{ + int ret; + + /* Allocate send & receive queues */ + ret = virtnet_alloc_queues(vi); + if (ret) + goto err; + + ret = virtnet_find_vqs(vi); + if (ret) + goto err_free; + + cpus_read_lock(); + virtnet_set_affinity(vi); + cpus_read_unlock(); + + return 0; + +err_free: + virtnet_free_queues(vi); +err: + return ret; +} + +static bool virtnet_fail_on_feature(struct virtio_device *vdev, + unsigned int fbit, + const char *fname, const char *dname) +{ + if (!virtio_has_feature(vdev, fbit)) + return false; + + dev_err(&vdev->dev, "device advertises feature %s but not %s", + fname, dname); + + return true; +} + +#define VIRTNET_FAIL_ON(vdev, fbit, dbit) \ + virtnet_fail_on_feature(vdev, fbit, #fbit, dbit) + +static bool virtnet_validate_features(struct virtio_device *vdev) +{ + if (!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) && + (VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_RX, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_VLAN, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_RSS, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_HASH_REPORT, + "VIRTIO_NET_F_CTRL_VQ") || + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_NOTF_COAL, + "VIRTIO_NET_F_CTRL_VQ"))) { + return false; + } + + return true; +} + +#define MIN_MTU ETH_MIN_MTU +#define MAX_MTU ETH_MAX_MTU + +static int 
virtnet_validate(struct virtio_device *vdev) +{ + if (!vdev->config->get) { + dev_err(&vdev->dev, "%s failure: config access disabled\n", + __func__); + return -EINVAL; + } + + if (!virtnet_validate_features(vdev)) + return -EINVAL; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) { + int mtu = virtio_cread16(vdev, + offsetof(struct virtio_net_config, + mtu)); + if (mtu < MIN_MTU) + __virtio_clear_bit(vdev, VIRTIO_NET_F_MTU); + } + + if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY) && + !virtio_has_feature(vdev, VIRTIO_NET_F_MAC)) { + dev_warn(&vdev->dev, "device advertises feature VIRTIO_NET_F_STANDBY but not VIRTIO_NET_F_MAC, disabling standby"); + __virtio_clear_bit(vdev, VIRTIO_NET_F_STANDBY); + } + + return 0; +} + +static bool virtnet_check_guest_gso(const struct virtnet_info *vi) +{ + return virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) || + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) || + (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_USO4) && + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_USO6)); +} + +static void virtnet_set_big_packets(struct virtnet_info *vi, const int mtu) +{ + bool guest_gso = virtnet_check_guest_gso(vi); + + /* If device can receive ANY guest GSO packets, regardless of mtu, + * allocate packets of maximum size, otherwise limit it to only + * mtu size worth only. + */ + if (mtu > ETH_DATA_LEN || guest_gso) { + vi->big_packets = true; + vi->big_packets_num_skbfrags = guest_gso ? 
MAX_SKB_FRAGS : DIV_ROUND_UP(mtu, PAGE_SIZE); + } +} + +static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs) +{ + int err; + + rtnl_lock(); + err = _virtnet_set_queues(vi, queue_pairs); + rtnl_unlock(); + return err; +} + +static int virtnet_probe(struct virtio_device *vdev) +{ + int i, err = -ENOMEM; + struct net_device *dev; + struct virtnet_info *vi; + u16 max_queue_pairs; + int mtu = 0; + + /* Find if host supports multiqueue/rss virtio_net device */ + max_queue_pairs = 1; + if (virtio_has_feature(vdev, VIRTIO_NET_F_MQ) || virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) + max_queue_pairs = + virtio_cread16(vdev, offsetof(struct virtio_net_config, max_virtqueue_pairs)); + + /* We need at least 2 queue's */ + if (max_queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN || + max_queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX || + !virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) + max_queue_pairs = 1; + + /* Allocate ourselves a network device with room for our info */ + dev = alloc_etherdev_mq(sizeof(struct virtnet_info), max_queue_pairs); + if (!dev) + return -ENOMEM; + + /* Set up network device as normal. */ + dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE | + IFF_TX_SKB_NO_LINEAR; + dev->netdev_ops = virtnet_get_netdev(); + dev->features = NETIF_F_HIGHDMA; + + dev->ethtool_ops = virtnet_get_ethtool_ops(); + SET_NETDEV_DEV(dev, &vdev->dev); + + /* Do we support "hardware" checksums? */ + if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) { + /* This opens up the world of extra features. */ + dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG; + if (csum) + dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) { + dev->hw_features |= NETIF_F_TSO + | NETIF_F_TSO_ECN | NETIF_F_TSO6; + } + /* Individual feature bits: what can host handle?
*/ + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4)) + dev->hw_features |= NETIF_F_TSO; + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO6)) + dev->hw_features |= NETIF_F_TSO6; + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN)) + dev->hw_features |= NETIF_F_TSO_ECN; + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_USO)) + dev->hw_features |= NETIF_F_GSO_UDP_L4; + + dev->features |= NETIF_F_GSO_ROBUST; + + if (gso) + dev->features |= dev->hw_features & NETIF_F_ALL_TSO; + /* (!csum && gso) case will be fixed by register_netdev() */ + } + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) + dev->features |= NETIF_F_RXCSUM; + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || + virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) + dev->features |= NETIF_F_GRO_HW; + if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) + dev->hw_features |= NETIF_F_GRO_HW; + + dev->vlan_features = dev->features; + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + + /* MTU range: 68 - 65535 */ + dev->min_mtu = MIN_MTU; + dev->max_mtu = MAX_MTU; + + /* Configuration may specify what MAC to use. Otherwise random. 
*/ + if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC)) { + u8 addr[ETH_ALEN]; + + virtio_cread_bytes(vdev, + offsetof(struct virtio_net_config, mac), + addr, ETH_ALEN); + eth_hw_addr_set(dev, addr); + } else { + eth_hw_addr_random(dev); + dev_info(&vdev->dev, "Assigned random MAC address %pM\n", + dev->dev_addr); + } + + /* Set up our device-specific information */ + vi = netdev_priv(dev); + vi->dev = dev; + vi->vdev = vdev; + vdev->priv = vi; + + INIT_WORK(&vi->config_work, virtnet_config_changed_work); + spin_lock_init(&vi->refill_lock); + + if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { + vi->mergeable_rx_bufs = true; + dev->xdp_features |= NETDEV_XDP_ACT_RX_SG; + } + + if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) { + vi->rx_usecs = 0; + vi->tx_usecs = 0; + vi->tx_max_packets = 0; + vi->rx_max_packets = 0; + } + + if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT)) + vi->has_rss_hash_report = true; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) + vi->has_rss = true; + + if (vi->has_rss || vi->has_rss_hash_report) { + vi->rss_indir_table_size = + virtio_cread16(vdev, offsetof(struct virtio_net_config, + rss_max_indirection_table_length)); + vi->rss_key_size = + virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size)); + + vi->rss_hash_types_supported = + virtio_cread32(vdev, offsetof(struct virtio_net_config, supported_hash_types)); + vi->rss_hash_types_supported &= + ~(VIRTIO_NET_RSS_HASH_TYPE_IP_EX | + VIRTIO_NET_RSS_HASH_TYPE_TCP_EX | + VIRTIO_NET_RSS_HASH_TYPE_UDP_EX); + + dev->hw_features |= NETIF_F_RXHASH; + } + + if (vi->has_rss_hash_report) + vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash); + else if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) || + virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) + vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf); + else + vi->hdr_len = sizeof(struct virtio_net_hdr); + + if (virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT) || + virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) +
vi->any_header_sg = true; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) + vi->has_cvq = true; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) { + mtu = virtio_cread16(vdev, + offsetof(struct virtio_net_config, + mtu)); + if (mtu < dev->min_mtu) { + /* Should never trigger: MTU was previously validated + * in virtnet_validate. + */ + dev_err(&vdev->dev, + "device MTU appears to have changed it is now %d < %d", + mtu, dev->min_mtu); + err = -EINVAL; + goto free; + } + + dev->mtu = mtu; + dev->max_mtu = mtu; + } + + virtnet_set_big_packets(vi, mtu); + + if (vi->any_header_sg) + dev->needed_headroom = vi->hdr_len; + + /* Enable multiqueue by default */ + if (num_online_cpus() >= max_queue_pairs) + vi->curr_queue_pairs = max_queue_pairs; + else + vi->curr_queue_pairs = num_online_cpus(); + vi->max_queue_pairs = max_queue_pairs; + + /* Allocate/initialize the rx/tx queues, and invoke find_vqs */ + err = init_vqs(vi); + if (err) + goto free; + + virtnet_dev_rx_queue_group(vi, dev); + netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs); + netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs); + + virtnet_init_settings(dev); + + if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY)) { + vi->failover = net_failover_create(vi->dev); + if (IS_ERR(vi->failover)) { + err = PTR_ERR(vi->failover); + goto free_vqs; + } + } + + if (vi->has_rss || vi->has_rss_hash_report) + virtnet_init_default_rss(vi); + + /* serialize netdev register + virtio_device_ready() with ndo_open() */ + rtnl_lock(); + + err = register_netdevice(dev); + if (err) { + pr_debug("virtio_net: registering device failed\n"); + rtnl_unlock(); + goto free_failover; + } + + virtio_device_ready(vdev); + + /* a random MAC address has been assigned, notify the device. 
+	 * We don't fail probe if VIRTIO_NET_F_CTRL_MAC_ADDR is not there
+	 * because many devices work fine without getting MAC explicitly
+	 */
+	if (!virtio_has_feature(vdev, VIRTIO_NET_F_MAC) &&
+	    virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) {
+		if (virtnet_ctrl_set_mac_address(vi, dev->dev_addr, dev->addr_len)) {
+			rtnl_unlock();
+			err = -EINVAL;
+			goto free_unregister_netdev;
+		}
+	}
+
+	rtnl_unlock();
+
+	err = virtnet_cpu_notif_add(vi);
+	if (err) {
+		pr_debug("virtio_net: registering cpu notifier failed\n");
+		goto free_unregister_netdev;
+	}
+
+	virtnet_set_queues(vi, vi->curr_queue_pairs);
+
+	/* Assume link up if device can't report link status,
+	   otherwise get link status from config. */
+	netif_carrier_off(dev);
+	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
+		schedule_work(&vi->config_work);
+	} else {
+		vi->status = VIRTIO_NET_S_LINK_UP;
+		virtnet_update_settings(vi);
+		netif_carrier_on(dev);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(guest_offloads); i++)
+		if (virtio_has_feature(vi->vdev, guest_offloads[i]))
+			set_bit(guest_offloads[i], &vi->guest_offloads);
+	vi->guest_offloads_capable = vi->guest_offloads;
+
+	pr_debug("virtnet: registered device %s with %d RX and TX vq's\n",
+		 dev->name, max_queue_pairs);
+
+	return 0;
+
+free_unregister_netdev:
+	unregister_netdev(dev);
+free_failover:
+	net_failover_destroy(vi->failover);
+free_vqs:
+	virtio_reset_device(vdev);
+	cancel_delayed_work_sync(&vi->refill);
+	free_receive_page_frags(vi);
+	virtnet_del_vqs(vi);
+free:
+	free_netdev(dev);
+	return err;
+}
+
+static void remove_vq_common(struct virtnet_info *vi)
+{
+	virtio_reset_device(vi->vdev);
+
+	virtnet_free_bufs(vi);
+
+	free_receive_page_frags(vi);
+
+	virtnet_del_vqs(vi);
+}
+
+static void virtnet_remove(struct virtio_device *vdev)
+{
+	struct virtnet_info *vi = vdev->priv;
+
+	virtnet_cpu_notif_remove(vi);
+
+	/* Make sure no work handler is accessing the device.
+	 */
+	flush_work(&vi->config_work);
+
+	unregister_netdev(vi->dev);
+
+	net_failover_destroy(vi->failover);
+
+	remove_vq_common(vi);
+
+	free_netdev(vi->dev);
+}
+
+static __maybe_unused int virtnet_freeze(struct virtio_device *vdev)
+{
+	struct virtnet_info *vi = vdev->priv;
+
+	virtnet_cpu_notif_remove(vi);
+	virtnet_freeze_down(vdev);
+	remove_vq_common(vi);
+
+	return 0;
+}
+
+static __maybe_unused int virtnet_restore(struct virtio_device *vdev)
+{
+	struct virtnet_info *vi = vdev->priv;
+	int err;
+
+	err = virtnet_restore_up(vdev);
+	if (err)
+		return err;
+	virtnet_set_queues(vi, vi->curr_queue_pairs);
+
+	err = virtnet_cpu_notif_add(vi);
+	if (err) {
+		virtnet_freeze_down(vdev);
+		remove_vq_common(vi);
+		return err;
+	}
+
+	return 0;
+}
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+#define VIRTNET_FEATURES \
+	VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM, \
+	VIRTIO_NET_F_MAC, \
+	VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6, \
+	VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, \
+	VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO, \
+	VIRTIO_NET_F_HOST_USO, VIRTIO_NET_F_GUEST_USO4, VIRTIO_NET_F_GUEST_USO6, \
+	VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ, \
+	VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN, \
+	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \
+	VIRTIO_NET_F_CTRL_MAC_ADDR, \
+	VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
+	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
+	VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
+	VIRTIO_NET_F_GUEST_HDRLEN
+
+static unsigned int features[] = {
+	VIRTNET_FEATURES,
+};
+
+static unsigned int features_legacy[] = {
+	VIRTNET_FEATURES,
+	VIRTIO_NET_F_GSO,
+	VIRTIO_F_ANY_LAYOUT,
+};
+
+static struct virtio_driver virtio_net_driver = {
+	.feature_table = features,
+	.feature_table_size = ARRAY_SIZE(features),
+	.feature_table_legacy = features_legacy,
+	.feature_table_size_legacy = ARRAY_SIZE(features_legacy),
+	.driver.name = KBUILD_MODNAME,
+	.driver.owner = THIS_MODULE,
+	.id_table = id_table,
+	.validate = virtnet_validate,
+	.probe = virtnet_probe,
+	.remove = virtnet_remove,
+	.config_changed = virtnet_config_changed,
+#ifdef CONFIG_PM_SLEEP
+	.freeze = virtnet_freeze,
+	.restore = virtnet_restore,
+#endif
+};
+
+int virtnet_register_virtio_driver(void)
+{
+	return register_virtio_driver(&virtio_net_driver);
+}
+
+void virtnet_unregister_virtio_driver(void)
+{
+	unregister_virtio_driver(&virtio_net_driver);
+}
+
+MODULE_DEVICE_TABLE(virtio, id_table);
diff --git a/drivers/net/virtio/virtnet_virtio.h b/drivers/net/virtio/virtnet_virtio.h
new file mode 100644
index 000000000000..15be2fdf2cd1
--- /dev/null
+++ b/drivers/net/virtio/virtnet_virtio.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __VIRTNET_VIRTIO_H__
+#define __VIRTNET_VIRTIO_H__
+
+int virtnet_register_virtio_driver(void);
+void virtnet_unregister_virtio_driver(void);
+#endif

-- 
2.32.0.3.g01195cf9f
On Tue, Mar 28, 2023 at 05:28:31PM +0800, Xuan Zhuo wrote:
> Considering the complexity of virtio-net.c and the new features we want
> to add, it is time to split virtio-net.c into multiple independent
> module files.
>
> This is beneficial to the maintenance and adding new functions.
>
> And AF_XDP support will be added later, then a separate xsk.c file will
> be added.
>
> This patchset split virtio-net.c into these parts:
>
> * virtnet.c: virtio net device ops (napi, tx, rx, device ops, ...)
> * virtnet_common.c: virtio net common code
> * virtnet_ethtool.c: virtio net ethtool callbacks
> * virtnet_ctrl.c: virtio net ctrl queue command APIs
> * virtnet_virtio.c: virtio net virtio callbacks/ops (driver register,
>   virtio probe, virtio free, ...)
>
> Please review.
>
> Thanks.

I don't feel this is an improvement as presented; it will need more work
to make the code placement logical.

For example, where do I find the code that updates rq stats? It is part
of the rx data path, so it should be in virtnet.c? No, it is in
virtnet_ethtool.c, because the rq stats can be accessed through ethtool.
A bunch of stuff seems to be in headers just because of technicalities,
and virtnet_common looks like a dumping ground with no guiding principle
at all. drivers/net/virtio/virtnet_virtio.c is also odd, with "virt"
repeated three times in the path.

These things only get murkier with time. At the point of a reorg I would
expect very logical placement: without a clear guiding rule, finding
where something lives becomes harder, and more importantly we will get
endless heartburn about where each new function should go.

The reorg is also not free. For example, git log --follow will no longer
easily cover the whole driver history, because --follow works with
exactly one path. It is now also extra work to keep the headers
self-consistent.
So it better be a big improvement to be worth it.

> Xuan Zhuo (16):
>   virtio_net: add a separate directory for virtio-net
>   virtio_net: move struct to header file
>   virtio_net: add prefix to the struct inside header file
>   virtio_net: separating cpu-related funs
>   virtio_net: separate virtnet_ctrl_set_queues()
>   virtio_net: separate virtnet_ctrl_set_mac_address()
>   virtio_net: remove lock from virtnet_ack_link_announce()
>   virtio_net: separating the APIs of cq
>   virtio_net: introduce virtnet_rq_update_stats()
>   virtio_net: separating the funcs of ethtool
>   virtio_net: introduce virtnet_dev_rx_queue_group()
>   virtio_net: introduce virtnet_get_netdev()
>   virtio_net: prepare for virtio
>   virtio_net: move virtnet_[en/dis]able_delayed_refill to header file
>   virtio_net: add APIs to register/unregister virtio driver
>   virtio_net: separating the virtio code
>
>  MAINTAINERS                                  |    2 +-
>  drivers/net/Kconfig                          |    8 +-
>  drivers/net/Makefile                         |    2 +-
>  drivers/net/virtio/Kconfig                   |   11 +
>  drivers/net/virtio/Makefile                  |   10 +
>  .../net/{virtio_net.c => virtio/virtnet.c}   | 2368 ++---------------
>  drivers/net/virtio/virtnet.h                 |  213 ++
>  drivers/net/virtio/virtnet_common.c          |  138 +
>  drivers/net/virtio/virtnet_common.h          |   14 +
>  drivers/net/virtio/virtnet_ctrl.c            |  272 ++
>  drivers/net/virtio/virtnet_ctrl.h            |   45 +
>  drivers/net/virtio/virtnet_ethtool.c         |  578 ++++
>  drivers/net/virtio/virtnet_ethtool.h         |    8 +
>  drivers/net/virtio/virtnet_virtio.c          |  880 ++++++
>  drivers/net/virtio/virtnet_virtio.h          |    8 +
>  15 files changed, 2366 insertions(+), 2191 deletions(-)
>  create mode 100644 drivers/net/virtio/Kconfig
>  create mode 100644 drivers/net/virtio/Makefile
>  rename drivers/net/{virtio_net.c => virtio/virtnet.c} (50%)
>  create mode 100644 drivers/net/virtio/virtnet.h
>  create mode 100644 drivers/net/virtio/virtnet_common.c
>  create mode 100644 drivers/net/virtio/virtnet_common.h
>  create mode 100644 drivers/net/virtio/virtnet_ctrl.c
>  create mode 100644 drivers/net/virtio/virtnet_ctrl.h
>  create mode 100644 drivers/net/virtio/virtnet_ethtool.c
>  create mode 100644 drivers/net/virtio/virtnet_ethtool.h
>  create mode 100644 drivers/net/virtio/virtnet_virtio.c
>  create mode 100644 drivers/net/virtio/virtnet_virtio.h
>
> -- 
> 2.32.0.3.g01195cf9f
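[Editor's note: the `git log --follow` caveat raised in the review is easy to reproduce. The sketch below builds a throwaway repository whose file names merely mimic the patchset's rename; nothing here touches an actual kernel tree.]

```shell
# Demonstrate that --follow traces history across a rename, but
# accepts exactly one pathspec, so it cannot cover a whole directory.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email test@example.com
git config user.name test

echo 'driver code' > virtio_net.c
git add virtio_net.c
git commit -q -m 'add virtio_net.c'

mkdir virtio
git mv virtio_net.c virtio/virtnet.c
git commit -q -m 'move to virtio/virtnet.c'

# Without --follow, history stops at the rename: only the move commit.
git log --oneline -- virtio/virtnet.c | wc -l
# With --follow, the pre-rename commit is found too.
git log --follow --oneline -- virtio/virtnet.c | wc -l
# But --follow refuses more than one pathspec:
git log --follow -- virtio/virtnet.c virtio_net.c 2>&1 | head -n 1
```

So after the split, per-file history still works with `--follow`, but there is no single command that follows every renamed piece of the old virtio_net.c at once.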