Wei Liu
2013-May-27 11:29 UTC
[PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
* This is a xen-devel only post, since we have not reached consensus on what
to add / remove in this new model. This series tries to be conservative about
adding new features compared to V1.

This series implements the NAPI + kthread 1:1 model for Xen netback. This
model
 - provides better scheduling fairness among vifs
 - is a prerequisite for implementing multiqueue for the Xen network driver

The first two patches are groundwork for the third patch. The first one
simplifies code in netback; the second one reduces the memory footprint once
we switch to the 1:1 model.

The third patch has the real meat:
 - make use of NAPI to mitigate interrupts
 - kthreads are not bound to CPUs any more, so that we can take advantage of
   the backend scheduler and trust it to do the right thing

Changes since V1:
 - No page pool in this version. Instead the page tracking facility is
   removed.

Wei Liu (3):
  xen-netback: remove page tracking facility
  xen-netback: switch to per-cpu scratch space
  xen-netback: switch to NAPI + kthread 1:1 model

 drivers/net/xen-netback/common.h    |   92 ++--
 drivers/net/xen-netback/interface.c |  122 +++--
 drivers/net/xen-netback/netback.c   |  959 +++++++++++++++--------------------
 3 files changed, 537 insertions(+), 636 deletions(-)

--
1.7.10.4
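The cover letter's "NAPI + kthread 1:1" model hinges on NAPI's budget contract: a poll routine does at most `budget` units of work, and only completes (re-enabling event delivery) when it does less than a full budget. The following is a minimal userspace sketch of that contract, not code from the series; `fake_vif` and `fake_poll` are illustrative stand-ins for the per-vif state and `xenvif_poll`.

```c
#include <assert.h>

/* Hypothetical stand-in for the per-vif state the series introduces. */
struct fake_vif {
	int pending;     /* requests waiting on the TX ring */
	int napi_active; /* 1 while the NAPI instance is scheduled */
};

/* Mimics the shape of xenvif_poll(): consume at most @budget requests.
 * If less than @budget work was done, the poll instance completes
 * (the __napi_complete() path, after which the interrupt would fire
 * again for new requests); otherwise it stays scheduled and the core
 * will poll it again, which is what gives fairness across vifs. */
static int fake_poll(struct fake_vif *vif, int budget)
{
	int work_done = 0;

	while (work_done < budget && vif->pending > 0) {
		vif->pending--;
		work_done++;
	}

	if (work_done < budget)
		vif->napi_active = 0; /* __napi_complete() equivalent */

	return work_done;
}
```

With a XENVIF_NAPI_WEIGHT-style budget of 64, a vif with a deep backlog keeps yielding to other pollers instead of monopolising a CPU, which is the fairness property the cover letter claims.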
The data flow from DomU to DomU on the same host:

With tracking facility:

        copy
DomU --------> Dom0          DomU
 |                            ^
 |____________________________|
             copy

In other words, we can always copy page from Dom0, thus removing the
need for a tracking facility.

        copy           copy
DomU --------> Dom0 --------> DomU

Simple iperf test shows no performance regression (obviously we do two
copies anyway):

  W/ tracking:  ~5.3Gb/s
  W/o tracking: ~5.4Gb/s

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/netback.c |   77 +------------------------------------
 1 file changed, 2 insertions(+), 75 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 82576ff..54853be 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -95,21 +95,6 @@ struct netbk_rx_meta {

 #define MAX_BUFFER_OFFSET	PAGE_SIZE

-/* extra field used in struct page */
-union page_ext {
-	struct {
-#if BITS_PER_LONG < 64
-#define IDX_WIDTH   8
-#define GROUP_WIDTH (BITS_PER_LONG - IDX_WIDTH)
-		unsigned int group:GROUP_WIDTH;
-		unsigned int idx:IDX_WIDTH;
-#else
-		unsigned int group, idx;
-#endif
-	} e;
-	void *mapping;
-};
-
 struct xen_netbk {
 	wait_queue_head_t wq;
 	struct task_struct *task;
@@ -214,45 +199,6 @@ static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
 	return (unsigned long)pfn_to_kaddr(idx_to_pfn(netbk, idx));
 }

-/* extra field used in struct page */
-static inline void set_page_ext(struct page *pg, struct xen_netbk *netbk,
-				unsigned int idx)
-{
-	unsigned int group = netbk - xen_netbk;
-	union page_ext ext = { .e = { .group = group + 1, .idx = idx } };
-
-	BUILD_BUG_ON(sizeof(ext) > sizeof(ext.mapping));
-	pg->mapping = ext.mapping;
-}
-
-static int get_page_ext(struct page *pg,
-			unsigned int *pgroup, unsigned int *pidx)
-{
-	union page_ext ext = { .mapping = pg->mapping };
-	struct xen_netbk *netbk;
-	unsigned int group, idx;
-
-	group = ext.e.group - 1;
-
-	if (group < 0 || group >= xen_netbk_group_nr)
-		return 0;
-
-	netbk = &xen_netbk[group];
-
-	idx = ext.e.idx;
-
-	if ((idx < 0) || (idx >= MAX_PENDING_REQS))
-		return 0;
-
-	if (netbk->mmap_pages[idx] != pg)
-		return 0;
-
-	*pgroup = group;
-	*pidx = idx;
-
-	return 1;
-}
-
 /*
  * This is the amount of packet we copy rather than map, so that the
  * guest can't fiddle with the contents of the headers while we do
@@ -453,12 +399,6 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 {
 	struct gnttab_copy *copy_gop;
 	struct netbk_rx_meta *meta;
-	/*
-	 * These variables are used iff get_page_ext returns true,
-	 * in which case they are guaranteed to be initialized.
-	 */
-	unsigned int uninitialized_var(group), uninitialized_var(idx);
-	int foreign = get_page_ext(page, &group, &idx);
 	unsigned long bytes;

 	/* Data must not cross a page boundary. */
@@ -494,20 +434,9 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop = npo->copy + npo->copy_prod++;
 		copy_gop->flags = GNTCOPY_dest_gref;
-		if (foreign) {
-			struct xen_netbk *netbk = &xen_netbk[group];
-			struct pending_tx_info *src_pend;
+		copy_gop->source.domid = DOMID_SELF;
+		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));

-			src_pend = &netbk->pending_tx_info[idx];
-
-			copy_gop->source.domid = src_pend->vif->domid;
-			copy_gop->source.u.ref = src_pend->req.gref;
-			copy_gop->flags |= GNTCOPY_source_gref;
-		} else {
-			void *vaddr = page_address(page);
-			copy_gop->source.domid = DOMID_SELF;
-			copy_gop->source.u.gmfn = virt_to_mfn(vaddr);
-		}
 		copy_gop->source.offset = offset;
 		copy_gop->dest.domid = vif->domid;
@@ -1045,7 +974,6 @@ static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk,
 	page = alloc_page(GFP_KERNEL|__GFP_COLD);
 	if (!page)
 		return NULL;
-	set_page_ext(page, netbk, pending_idx);
 	netbk->mmap_pages[pending_idx] = page;
 	return page;
 }
@@ -1153,7 +1081,6 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
 				first->req.offset = 0;
 				first->req.size = dst_offset;
 				first->head = start_idx;
-				set_page_ext(page, netbk, head_idx);
 				netbk->mmap_pages[head_idx] = page;
 				frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 			}
--
1.7.10.4
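After this patch the RX-side grant copy source is always a local Dom0 page, so only the destination needs a grant reference. A userspace sketch of how such a copy descriptor ends up filled in — `fake_gnttab_copy` and `fill_rx_copy_op` are simplified illustrative stand-ins, not the real `struct gnttab_copy` layout, though the field names and the `DOMID_SELF` / `GNTCOPY_dest_gref` constants follow the Xen public headers:

```c
#include <assert.h>
#include <stdint.h>

#define DOMID_SELF        0x7FF0U   /* "this domain" in grant operations */
#define GNTCOPY_dest_gref (1 << 1)  /* destination is a grant ref, source is a frame */

/* Simplified stand-in for struct gnttab_copy. */
struct fake_gnttab_copy {
	struct { uint16_t domid; uint64_t gmfn;  uint16_t offset; } source;
	struct { uint16_t domid; uint32_t gref;  uint16_t offset; } dest;
	uint16_t len;
	uint16_t flags;
};

/* Without page tracking, every RX copy reads from a local frame
 * (DOMID_SELF + gmfn) and writes through the frontend's grant ref;
 * GNTCOPY_source_gref is never set any more. */
static void fill_rx_copy_op(struct fake_gnttab_copy *op,
			    uint64_t local_gmfn, uint32_t dest_gref,
			    uint16_t dest_domid, uint16_t len)
{
	op->flags         = GNTCOPY_dest_gref;
	op->source.domid  = DOMID_SELF;
	op->source.gmfn   = local_gmfn;
	op->source.offset = 0;
	op->dest.domid    = dest_domid;
	op->dest.gref     = dest_gref;
	op->dest.offset   = 0;
	op->len           = len;
}
```

This is exactly why the DomU-to-DomU path becomes two hypervisor copies: the TX side first lands the data in a Dom0 page, and the RX side then copies that Dom0 page out, instead of copying straight from the sending guest's granted page.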
There are at most num_online_cpus() netback threads running. We can make
use of per-cpu scratch space to reduce the size of buffer space when we
move to the 1:1 model. In the unlikely event that per-cpu scratch space
is not available, processing routines will refuse to run on that CPU.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/netback.c |  247 ++++++++++++++++++++++++++++++-------
 1 file changed, 204 insertions(+), 43 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 54853be..0f69eda 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -37,6 +37,7 @@
 #include <linux/kthread.h>
 #include <linux/if_vlan.h>
 #include <linux/udp.h>
+#include <linux/cpu.h>

 #include <net/tcp.h>
@@ -95,6 +96,24 @@ struct netbk_rx_meta {

 #define MAX_BUFFER_OFFSET	PAGE_SIZE

+/* Coalescing tx requests before copying makes number of grant
+ * copy ops greater or equal to number of slots required. In
+ * worst case a tx request consumes 2 gnttab_copy. So the size
+ * of tx_copy_ops array should be 2*MAX_PENDING_REQS.
+ */
+#define TX_COPY_OPS_SIZE (2*MAX_PENDING_REQS)
+DEFINE_PER_CPU(struct gnttab_copy *, tx_copy_ops);
+
+/* Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
+ * head/fragment page uses 2 copy operations because it
+ * straddles two buffers in the frontend. So the size of following
+ * arrays should be 2*XEN_NETIF_RX_RING_SIZE.
+ */
+#define GRANT_COPY_OP_SIZE (2*XEN_NETIF_RX_RING_SIZE)
+#define META_SIZE (2*XEN_NETIF_RX_RING_SIZE)
+DEFINE_PER_CPU(struct gnttab_copy *, grant_copy_op);
+DEFINE_PER_CPU(struct netbk_rx_meta *, meta);
+
 struct xen_netbk {
 	wait_queue_head_t wq;
 	struct task_struct *task;
@@ -116,21 +135,7 @@ struct xen_netbk {
 	atomic_t netfront_count;

 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
-
 	u16 pending_ring[MAX_PENDING_REQS];
-
-	/*
-	 * Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
-	 * head/fragment page uses 2 copy operations because it
-	 * straddles two buffers in the frontend.
-	 */
-	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
-	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
 };

 static struct xen_netbk *xen_netbk;
@@ -608,12 +613,31 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 	int count;
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
+	struct gnttab_copy *gco = get_cpu_var(grant_copy_op);
+	struct netbk_rx_meta *m = get_cpu_var(meta);
+	static int unusable_count;

 	struct netrx_pending_operations npo = {
-		.copy  = netbk->grant_copy_op,
-		.meta  = netbk->meta,
+		.copy = gco,
+		.meta = m,
 	};

+	if (gco == NULL || m == NULL) {
+		put_cpu_var(grant_copy_op);
+		put_cpu_var(meta);
+		if (unusable_count == 1000) {
+			printk(KERN_ALERT
+			       "xen-netback: "
+			       "CPU %d scratch space is not available,"
+			       " not doing any TX work for netback/%d\n",
+			       smp_processor_id(),
+			       (int)(netbk - xen_netbk));
+			unusable_count = 0;
+		} else
+			unusable_count++;
+		return;
+	}
+
 	skb_queue_head_init(&rxq);

 	count = 0;
@@ -635,27 +659,30 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 			break;
 	}

-	BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta));
+	BUG_ON(npo.meta_prod > META_SIZE);

-	if (!npo.copy_prod)
+	if (!npo.copy_prod) {
+		put_cpu_var(grant_copy_op);
+		put_cpu_var(meta);
 		return;
+	}

-	BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
-	gnttab_batch_copy(netbk->grant_copy_op, npo.copy_prod);
+	BUG_ON(npo.copy_prod > GRANT_COPY_OP_SIZE);
+	gnttab_batch_copy(gco, npo.copy_prod);

 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;

 		vif = netdev_priv(skb->dev);

-		if (netbk->meta[npo.meta_cons].gso_size && vif->gso_prefix) {
+		if (m[npo.meta_cons].gso_size && vif->gso_prefix) {
 			resp = RING_GET_RESPONSE(&vif->rx,
 						vif->rx.rsp_prod_pvt++);

 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;

-			resp->offset = netbk->meta[npo.meta_cons].gso_size;
-			resp->id = netbk->meta[npo.meta_cons].id;
+			resp->offset = m[npo.meta_cons].gso_size;
+			resp->id = m[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;

 			npo.meta_cons++;
@@ -680,12 +707,12 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 			flags |= XEN_NETRXF_data_validated;

 		offset = 0;
-		resp = make_rx_response(vif, netbk->meta[npo.meta_cons].id,
+		resp = make_rx_response(vif, m[npo.meta_cons].id,
 					status, offset,
-					netbk->meta[npo.meta_cons].size,
+					m[npo.meta_cons].size,
 					flags);

-		if (netbk->meta[npo.meta_cons].gso_size && !vif->gso_prefix) {
+		if (m[npo.meta_cons].gso_size && !vif->gso_prefix) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
 				RING_GET_RESPONSE(&vif->rx,
@@ -693,7 +720,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)

 			resp->flags |= XEN_NETRXF_extra_info;

-			gso->u.gso.size = netbk->meta[npo.meta_cons].gso_size;
+			gso->u.gso.size = m[npo.meta_cons].gso_size;
 			gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
@@ -703,7 +730,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 		}

 		netbk_add_frag_responses(vif, status,
-					 netbk->meta + npo.meta_cons + 1,
+					 m + npo.meta_cons + 1,
 					 sco->meta_slots_used);

 		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
@@ -726,6 +753,9 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 	if (!skb_queue_empty(&netbk->rx_queue) &&
 			!timer_pending(&netbk->net_timer))
 		xen_netbk_kick_thread(netbk);
+
+	put_cpu_var(grant_copy_op);
+	put_cpu_var(meta);
 }

 void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
@@ -1351,9 +1381,10 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }

-static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
+static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk,
+					struct gnttab_copy *tco)
 {
-	struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = tco, *request_gop;
 	struct sk_buff *skb;
 	int ret;

@@ -1531,16 +1562,17 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
 		vif->tx.req_cons = idx;
 		xen_netbk_check_rx_xenvif(vif);

-		if ((gop-netbk->tx_copy_ops) >= ARRAY_SIZE(netbk->tx_copy_ops))
+		if ((gop-tco) >= TX_COPY_OPS_SIZE)
 			break;
 	}

-	return gop - netbk->tx_copy_ops;
+	return gop - tco;
 }

-static void xen_netbk_tx_submit(struct xen_netbk *netbk)
+static void xen_netbk_tx_submit(struct xen_netbk *netbk,
+				struct gnttab_copy *tco)
 {
-	struct gnttab_copy *gop = netbk->tx_copy_ops;
+	struct gnttab_copy *gop = tco;
 	struct sk_buff *skb;

 	while ((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) {
@@ -1615,15 +1647,37 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
 static void xen_netbk_tx_action(struct xen_netbk *netbk)
 {
 	unsigned nr_gops;
+	struct gnttab_copy *tco;
+	static int unusable_count;
+
+	tco = get_cpu_var(tx_copy_ops);
+
+	if (tco == NULL) {
+		put_cpu_var(tx_copy_ops);
+		if (unusable_count == 1000) {
+			printk(KERN_ALERT
+			       "xen-netback: "
+			       "CPU %d scratch space is not available,"
+			       " not doing any RX work for netback/%d\n",
+			       smp_processor_id(),
+			       (int)(netbk - xen_netbk));
+		} else
+			unusable_count++;
+		return;
+	}

-	nr_gops = xen_netbk_tx_build_gops(netbk);
+	nr_gops = xen_netbk_tx_build_gops(netbk, tco);

-	if (nr_gops == 0)
+	if (nr_gops == 0) {
+		put_cpu_var(tx_copy_ops);
 		return;
+	}
+
+	gnttab_batch_copy(tco, nr_gops);

-	gnttab_batch_copy(netbk->tx_copy_ops, nr_gops);
+	xen_netbk_tx_submit(netbk, tco);

-	xen_netbk_tx_submit(netbk);
+	put_cpu_var(tx_copy_ops);
 }

 static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
@@ -1760,6 +1814,93 @@ static int xen_netbk_kthread(void *data)
 	return 0;
 }

+static int __create_percpu_scratch_space(unsigned int cpu)
+{
+	if (per_cpu(tx_copy_ops, cpu) ||
+	    per_cpu(grant_copy_op, cpu) ||
+	    per_cpu(meta, cpu))
+		return 0;
+
+	per_cpu(tx_copy_ops, cpu) =
+		vzalloc_node(sizeof(struct gnttab_copy) * TX_COPY_OPS_SIZE,
+			     cpu_to_node(cpu));
+
+	per_cpu(grant_copy_op, cpu) =
+		vzalloc_node(sizeof(struct gnttab_copy) * GRANT_COPY_OP_SIZE,
+			     cpu_to_node(cpu));
+
+	per_cpu(meta, cpu) =
+		vzalloc_node(sizeof(struct netbk_rx_meta) * META_SIZE,
+			     cpu_to_node(cpu));
+
+	if (!per_cpu(tx_copy_ops, cpu) ||
+	    !per_cpu(grant_copy_op, cpu) ||
+	    !per_cpu(meta, cpu))
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void __free_percpu_scratch_space(unsigned int cpu)
+{
+	void *tmp;
+
+	tmp = per_cpu(tx_copy_ops, cpu);
+	per_cpu(tx_copy_ops, cpu) = NULL;
+	vfree(tmp);
+
+	tmp = per_cpu(grant_copy_op, cpu);
+	per_cpu(grant_copy_op, cpu) = NULL;
+	vfree(tmp);
+
+	tmp = per_cpu(meta, cpu);
+	per_cpu(meta, cpu) = NULL;
+	vfree(tmp);
+}
+
+static int __netback_percpu_callback(struct notifier_block *nfb,
+				     unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned long)hcpu;
+	int rc = NOTIFY_DONE;
+
+	switch (action) {
+	case CPU_ONLINE:
+	case CPU_ONLINE_FROZEN:
+		printk(KERN_INFO "xen-netback: CPU %d online, creating scratch space\n",
+		       cpu);
+		rc = __create_percpu_scratch_space(cpu);
+		if (rc) {
+			printk(KERN_ALERT "xen-netback: failed to create scratch space for CPU %d\n",
+			       cpu);
+			/* There is really nothing more we can do. Free any
+			 * partially allocated scratch space. When processing
+			 * routines get to run they will just print warning
+			 * message and stop processing.
+			 */
+			__free_percpu_scratch_space(cpu);
+			rc = NOTIFY_BAD;
+		} else
+			rc = NOTIFY_OK;
+		break;
+	case CPU_DEAD:
+	case CPU_DEAD_FROZEN:
+		printk(KERN_INFO "xen-netback: CPU %d offline, destroying scratch space\n",
+		       cpu);
+		__free_percpu_scratch_space(cpu);
+		rc = NOTIFY_OK;
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+static struct notifier_block netback_notifier_block = {
+	.notifier_call = __netback_percpu_callback,
+};
+
 void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1810,6 +1951,7 @@ static int __init netback_init(void)
 	int i;
 	int rc = 0;
 	int group;
+	int cpu;

 	if (!xen_domain())
 		return -ENODEV;
@@ -1821,10 +1963,21 @@ static int __init netback_init(void)
 		fatal_skb_slots = XEN_NETBK_LEGACY_SLOTS_MAX;
 	}

+	for_each_online_cpu(cpu) {
+		rc = __create_percpu_scratch_space(cpu);
+		if (rc) {
+			rc = -ENOMEM;
+			goto failed_init;
+		}
+	}
+	register_hotcpu_notifier(&netback_notifier_block);
+
 	xen_netbk_group_nr = num_online_cpus();
 	xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr);
-	if (!xen_netbk)
-		return -ENOMEM;
+	if (!xen_netbk) {
+		rc = -ENOMEM;
+		goto failed_init;
+	}

 	for (group = 0; group < xen_netbk_group_nr; group++) {
 		struct xen_netbk *netbk = &xen_netbk[group];
@@ -1849,7 +2002,7 @@ static int __init netback_init(void)
 			printk(KERN_ALERT "kthread_create() fails at netback\n");
 			del_timer(&netbk->net_timer);
 			rc = PTR_ERR(netbk->task);
-			goto failed_init;
+			goto failed_init_destroy_kthreads;
 		}

 		kthread_bind(netbk->task, group);
@@ -1865,17 +2018,20 @@ static int __init netback_init(void)

 	rc = xenvif_xenbus_init();
 	if (rc)
-		goto failed_init;
+		goto failed_init_destroy_kthreads;

 	return 0;

-failed_init:
+failed_init_destroy_kthreads:
 	while (--group >= 0) {
 		struct xen_netbk *netbk = &xen_netbk[group];
 		del_timer(&netbk->net_timer);
 		kthread_stop(netbk->task);
 	}
 	vfree(xen_netbk);
+failed_init:
+	for_each_online_cpu(cpu)
+		__free_percpu_scratch_space(cpu);
+
 	return rc;
 }

@@ -1899,6 +2055,11 @@ static void __exit netback_fini(void)
 	}
 	vfree(xen_netbk);
+
+	unregister_hotcpu_notifier(&netback_notifier_block);
+
+	for_each_online_cpu(i)
+		__free_percpu_scratch_space(i);
 }
 module_exit(netback_fini);
--
1.7.10.4
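The worst-case sizing the patch's comments describe can be checked with a little arithmetic. The sketch below assumes 4 KiB pages, where `__CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)` works out to 256 slots; the constants mirror the ones the patch defines:

```c
#include <assert.h>

/* Assumed values for a 4 KiB page size. */
#define MAX_PENDING_REQS       256
#define XEN_NETIF_RX_RING_SIZE 256

/* Coalesced tx requests need at most 2 gnttab_copy ops each. */
#define TX_COPY_OPS_SIZE   (2 * MAX_PENDING_REQS)

/* With MAX_BUFFER_OFFSET == 4096, an RX head/fragment page can
 * straddle two frontend buffers, so each slot may need 2 copy ops
 * and 2 meta entries. */
#define GRANT_COPY_OP_SIZE (2 * XEN_NETIF_RX_RING_SIZE)
#define META_SIZE          (2 * XEN_NETIF_RX_RING_SIZE)
```

The footprint win appears in the next patch: once netback state becomes per-vif, these scratch arrays would otherwise be duplicated in every vif, whereas here only one set exists per online CPU — and only one thread can be running on a CPU at a time, which is what makes the per-cpu sharing safe under `get_cpu_var()`/`put_cpu_var()`.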
This patch implements the 1:1 model netback. NAPI and kthread are
utilized to do the weight-lifting job:

 - NAPI is used for guest side TX (host side RX)
 - kthread is used for guest side RX (host side TX)

Xenvif and xen_netbk are made into one structure to reduce code size.

This model provides better scheduling fairness among vifs. It is also a
prerequisite for implementing multiqueue for Xen netback.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/common.h    |   92 +++--
 drivers/net/xen-netback/interface.c |  122 +++---
 drivers/net/xen-netback/netback.c   |  719 ++++++++++++-----------------------
 3 files changed, 373 insertions(+), 560 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 8a4d77e..5920d6c 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -45,15 +45,43 @@
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>

-struct xen_netbk;
+typedef unsigned int pending_ring_idx_t;
+#define INVALID_PENDING_RING_IDX (~0U)
+
+struct pending_tx_info {
+	struct xen_netif_tx_request req; /* coalesced tx request */
+	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
+				  * if it is head of one or more tx
+				  * reqs
+				  */
+};
+
+#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
+#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
+
+struct xenvif_rx_meta {
+	int id;
+	int size;
+	int gso_size;
+};
+
+/* Discriminate from any valid pending_idx value. */
+#define INVALID_PENDING_IDX 0xFFFF
+
+#define MAX_BUFFER_OFFSET	PAGE_SIZE
+
+#define MAX_PENDING_REQS 256

 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
 	unsigned int     handle;

-	/* Reference to netback processing backend. */
-	struct xen_netbk *netbk;
+	/* Use NAPI for guest TX */
+	struct napi_struct napi;
+	/* Use kthread for guest RX */
+	struct task_struct *task;
+	wait_queue_head_t wq;

 	u8               fe_dev_addr[6];

@@ -64,9 +92,6 @@ struct xenvif {
 	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */

-	/* List of frontends to notify after a batch of frames sent. */
-	struct list_head notify_list;
-
 	/* The shared rings and indexes. */
 	struct xen_netif_tx_back_ring tx;
 	struct xen_netif_rx_back_ring rx;
@@ -96,12 +121,20 @@ struct xenvif {
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;

+	struct sk_buff_head rx_queue;
+	struct sk_buff_head tx_queue;
+
+	struct page *mmap_pages[MAX_PENDING_REQS];
+
+	pending_ring_idx_t pending_prod;
+	pending_ring_idx_t pending_cons;
+
+	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+
+	u16 pending_ring[MAX_PENDING_REQS];
+
 	/* Miscellaneous private stuff. */
-	struct list_head schedule_list;
-	atomic_t         refcnt;
 	struct net_device *dev;
-
-	wait_queue_head_t waiting_to_free;
 };

 static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
@@ -109,9 +142,6 @@ static inline struct xenbus_device *xenvif_to_xenbus_device(struct xenvif *vif)
 	return to_xenbus_device(vif->dev->dev.parent);
 }

-#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
-
 struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
@@ -121,39 +151,26 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);

-void xenvif_get(struct xenvif *vif);
-void xenvif_put(struct xenvif *vif);
-
 int xenvif_xenbus_init(void);
 void xenvif_xenbus_fini(void);

 int xenvif_schedulable(struct xenvif *vif);

-int xen_netbk_rx_ring_full(struct xenvif *vif);
+int xenvif_rx_ring_full(struct xenvif *vif);

-int xen_netbk_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif *vif);

 /* (Un)Map communication rings. */
-void xen_netbk_unmap_frontend_rings(struct xenvif *vif);
-int xen_netbk_map_frontend_rings(struct xenvif *vif,
-				 grant_ref_t tx_ring_ref,
-				 grant_ref_t rx_ring_ref);
-
-/* (De)Register a xenvif with the netback backend. */
-void xen_netbk_add_xenvif(struct xenvif *vif);
-void xen_netbk_remove_xenvif(struct xenvif *vif);
-
-/* (De)Schedule backend processing for a xenvif */
-void xen_netbk_schedule_xenvif(struct xenvif *vif);
-void xen_netbk_deschedule_xenvif(struct xenvif *vif);
+void xenvif_unmap_frontend_rings(struct xenvif *vif);
+int xenvif_map_frontend_rings(struct xenvif *vif,
+			      grant_ref_t tx_ring_ref,
+			      grant_ref_t rx_ring_ref);

 /* Check for SKBs from frontend and schedule backend processing */
-void xen_netbk_check_rx_xenvif(struct xenvif *vif);
-/* Receive an SKB from the frontend */
-void xenvif_receive_skb(struct xenvif *vif, struct sk_buff *skb);
+void xenvif_check_rx_xenvif(struct xenvif *vif);

 /* Queue an SKB for transmission to the frontend */
-void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
+void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);

 /* Notify xenvif that ring now has space to send an skb to the frontend */
 void xenvif_notify_tx_completion(struct xenvif *vif);
@@ -161,7 +178,12 @@ void xenvif_notify_tx_completion(struct xenvif *vif);
 void xenvif_carrier_off(struct xenvif *vif);

 /* Returns number of ring slots required to send an skb to the frontend */
-unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
+unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
+
+int xenvif_tx_action(struct xenvif *vif, int budget);
+void xenvif_rx_action(struct xenvif *vif);
+
+int xenvif_kthread(void *data);

 extern bool separate_tx_rx_irq;

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 087d2db..3d30e93 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -30,6 +30,7 @@

 #include "common.h"

+#include <linux/kthread.h>
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
@@ -38,17 +39,7 @@
 #include <asm/xen/hypercall.h>

 #define XENVIF_QUEUE_LENGTH 32
-
-void xenvif_get(struct xenvif *vif)
-{
-	atomic_inc(&vif->refcnt);
-}
-
-void xenvif_put(struct xenvif *vif)
-{
-	if (atomic_dec_and_test(&vif->refcnt))
-		wake_up(&vif->waiting_to_free);
-}
+#define XENVIF_NAPI_WEIGHT  64

 int xenvif_schedulable(struct xenvif *vif)
 {
@@ -57,28 +48,46 @@ int xenvif_schedulable(struct xenvif *vif)

 static int xenvif_rx_schedulable(struct xenvif *vif)
 {
-	return xenvif_schedulable(vif) && !xen_netbk_rx_ring_full(vif);
+	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
 }

 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;

-	if (vif->netbk == NULL)
-		return IRQ_HANDLED;
-
-	xen_netbk_schedule_xenvif(vif);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
+		napi_schedule(&vif->napi);

 	return IRQ_HANDLED;
 }

+static int xenvif_poll(struct napi_struct *napi, int budget)
+{
+	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	int work_done;
+
+	work_done = xenvif_tx_action(vif, budget);
+
+	if (work_done < budget) {
+		int more_to_do = 0;
+		unsigned long flags;
+
+		local_irq_save(flags);
+
+		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		if (!more_to_do || work_done < 0)
+			__napi_complete(napi);
+
+		local_irq_restore(flags);
+	}
+
+	return work_done;
+}
+
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;

-	if (vif->netbk == NULL)
-		return IRQ_HANDLED;
-
 	if (xenvif_rx_schedulable(vif))
 		netif_wake_queue(vif->dev);

@@ -99,7 +108,8 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)

 	BUG_ON(skb->dev != dev);

-	if (vif->netbk == NULL)
+	/* Drop the packet if vif is not ready */
+	if (vif->task == NULL)
 		goto drop;

 	/* Drop the packet if the target domain has no receive buffers. */
@@ -107,13 +117,12 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;

 	/* Reserve ring slots for the worst-case number of fragments. */
-	vif->rx_req_cons_peek += xen_netbk_count_skb_slots(vif, skb);
-	xenvif_get(vif);
+	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);

-	if (vif->can_queue && xen_netbk_must_stop_queue(vif))
+	if (vif->can_queue && xenvif_must_stop_queue(vif))
 		netif_stop_queue(dev);

-	xen_netbk_queue_tx_skb(vif, skb);
+	xenvif_queue_tx_skb(vif, skb);

 	return NETDEV_TX_OK;

@@ -123,11 +132,6 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }

-void xenvif_receive_skb(struct xenvif *vif, struct sk_buff *skb)
-{
-	netif_rx_ni(skb);
-}
-
 void xenvif_notify_tx_completion(struct xenvif *vif)
 {
 	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
@@ -142,21 +146,20 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)

 static void xenvif_up(struct xenvif *vif)
 {
-	xen_netbk_add_xenvif(vif);
+	napi_enable(&vif->napi);
 	enable_irq(vif->tx_irq);
 	if (vif->tx_irq != vif->rx_irq)
 		enable_irq(vif->rx_irq);
-	xen_netbk_check_rx_xenvif(vif);
+	xenvif_check_rx_xenvif(vif);
 }

 static void xenvif_down(struct xenvif *vif)
 {
+	napi_disable(&vif->napi);
 	disable_irq(vif->tx_irq);
 	if (vif->tx_irq != vif->rx_irq)
 		disable_irq(vif->rx_irq);
 	del_timer_sync(&vif->credit_timeout);
-	xen_netbk_deschedule_xenvif(vif);
-	xen_netbk_remove_xenvif(vif);
 }

 static int xenvif_open(struct net_device *dev)
@@ -272,11 +275,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
+	int i;

 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
 	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
 	if (dev == NULL) {
-		pr_warn("Could not allocate netdev\n");
+		printk(KERN_WARNING "xen-netback: Could not allocate netdev for vif%d.%d\n",
+		       domid, handle);
 		return ERR_PTR(-ENOMEM);
 	}

@@ -285,14 +290,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif = netdev_priv(dev);
 	vif->domid  = domid;
 	vif->handle = handle;
-	vif->netbk  = NULL;
 	vif->can_sg = 1;
 	vif->csum = 1;
-	atomic_set(&vif->refcnt, 1);
-	init_waitqueue_head(&vif->waiting_to_free);
 	vif->dev = dev;
-	INIT_LIST_HEAD(&vif->schedule_list);
-	INIT_LIST_HEAD(&vif->notify_list);

 	vif->credit_bytes = vif->remaining_credit = ~0UL;
 	vif->credit_usec  = 0UL;
@@ -307,6 +307,16 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,

 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;

+	skb_queue_head_init(&vif->rx_queue);
+	skb_queue_head_init(&vif->tx_queue);
+
+	vif->pending_cons = 0;
+	vif->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		vif->pending_ring[i] = i;
+	for (i = 0; i < MAX_PENDING_REQS; i++)
+		vif->mmap_pages[i] = NULL;
+
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -316,6 +326,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;

+	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
+
 	netif_carrier_off(dev);

 	err = register_netdev(dev);
@@ -341,7 +353,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,

 	__module_get(THIS_MODULE);

-	err = xen_netbk_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;

@@ -377,7 +389,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		disable_irq(vif->rx_irq);
 	}

-	xenvif_get(vif);
+	init_waitqueue_head(&vif->wq);
+	vif->task = kthread_create(xenvif_kthread,
+				   (void *)vif,
+				   "vif%d.%d", vif->domid, vif->handle);
+	if (IS_ERR(vif->task)) {
+		printk(KERN_WARNING "xen-netback: Could not allocate kthread for vif%d.%d\n",
+		       vif->domid, vif->handle);
+		err = PTR_ERR(vif->task);
+		goto err_rx_unbind;
+	}

 	rtnl_lock();
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
@@ -388,12 +409,18 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	xenvif_up(vif);
 	rtnl_unlock();

+	wake_up_process(vif->task);
+
 	return 0;
+
+err_rx_unbind:
+	unbind_from_irqhandler(vif->rx_irq, vif);
+	vif->rx_irq = 0;
 err_tx_unbind:
 	unbind_from_irqhandler(vif->tx_irq, vif);
 	vif->tx_irq = 0;
 err_unmap:
-	xen_netbk_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(vif);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -408,7 +435,6 @@ void xenvif_carrier_off(struct xenvif *vif)
 	if (netif_running(dev))
 		xenvif_down(vif);
 	rtnl_unlock();
-	xenvif_put(vif);
 }

 void xenvif_disconnect(struct xenvif *vif)
@@ -422,9 +448,6 @@ void xenvif_disconnect(struct xenvif *vif)
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);

-	atomic_dec(&vif->refcnt);
-	wait_event(vif->waiting_to_free, atomic_read(&vif->refcnt) == 0);
-
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -438,9 +461,14 @@ void xenvif_disconnect(struct xenvif *vif)
 		need_module_put = 1;
 	}

+	if (vif->task)
+		kthread_stop(vif->task);
+
+	netif_napi_del(&vif->napi);
+
 	unregister_netdev(vif->dev);

-	xen_netbk_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(vif);

 	free_netdev(vif->dev);

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 0f69eda..92c5a50 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,31 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN

-typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
-
-struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	struct xenvif *vif;
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
-};
-
-struct netbk_rx_meta {
-	int id;
-	int size;
-	int gso_size;
-};
-
-#define MAX_PENDING_REQS 256
-
-/* Discriminate from any valid pending_idx value. */
-#define INVALID_PENDING_IDX 0xFFFF
-
-#define MAX_BUFFER_OFFSET	PAGE_SIZE
-
 /* Coalescing tx requests before copying makes number of grant
  * copy ops greater or equal to number of slots required. In
  * worst case a tx request consumes 2 gnttab_copy. So the size
@@ -112,79 +87,27 @@ DEFINE_PER_CPU(struct gnttab_copy *, tx_copy_ops);
 #define GRANT_COPY_OP_SIZE (2*XEN_NETIF_RX_RING_SIZE)
 #define META_SIZE (2*XEN_NETIF_RX_RING_SIZE)
 DEFINE_PER_CPU(struct gnttab_copy *, grant_copy_op);
-DEFINE_PER_CPU(struct netbk_rx_meta *, meta);
-
-struct xen_netbk {
-	wait_queue_head_t wq;
-	struct task_struct *task;
-
-	struct sk_buff_head rx_queue;
-	struct sk_buff_head tx_queue;
-
-	struct timer_list net_timer;
-
-	struct page *mmap_pages[MAX_PENDING_REQS];
-
-	pending_ring_idx_t pending_prod;
-	pending_ring_idx_t pending_cons;
-	struct list_head net_schedule_list;
-
-	/* Protect the net_schedule_list in netif. */
-	spinlock_t net_schedule_list_lock;
-
-	atomic_t netfront_count;
-
-	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
-	u16 pending_ring[MAX_PENDING_REQS];
-};
-
-static struct xen_netbk *xen_netbk;
-static int xen_netbk_group_nr;
+DEFINE_PER_CPU(struct xenvif_rx_meta *, meta);

 /*
  * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xen_netbk *netbk, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
 {
-	return netbk->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }

-void xen_netbk_add_xenvif(struct xenvif *vif)
-{
-	int i;
-	int min_netfront_count;
-	int min_group = 0;
-	struct xen_netbk *netbk;
-
-	min_netfront_count = atomic_read(&xen_netbk[0].netfront_count);
-	for (i = 0; i < xen_netbk_group_nr; i++) {
-		int netfront_count = atomic_read(&xen_netbk[i].netfront_count);
-		if (netfront_count < min_netfront_count) {
-			min_group = i;
-			min_netfront_count = netfront_count;
-		}
-	}
-
-	netbk = &xen_netbk[min_group];
-
-	vif->netbk = netbk;
-	atomic_inc(&netbk->netfront_count);
-}
-
-void xen_netbk_remove_xenvif(struct xenvif *vif)
-{
-	struct xen_netbk *netbk = vif->netbk;
-	vif->netbk = NULL;
-	atomic_dec(&netbk->netfront_count);
-}
-
-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
-				  u8 status);
+static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+			       u8 status);
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
+
+static inline int tx_work_todo(struct xenvif *vif);
+static inline int rx_work_todo(struct xenvif *vif);
+
 static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 					     u16      id,
 					     s8       st,
@@ -192,16 +115,16 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 					     u16      size,
 					     u16      flags);

-static inline unsigned long idx_to_pfn(struct xen_netbk *netbk,
+static inline unsigned long idx_to_pfn(struct xenvif *vif,
 				       u16 idx)
 {
-	return page_to_pfn(netbk->mmap_pages[idx]);
+	return page_to_pfn(vif->mmap_pages[idx]);
 }

-static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
+static inline unsigned long idx_to_kaddr(struct xenvif *vif,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(netbk, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
 }

 /*
@@ -229,15 +152,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }

-static inline pending_ring_idx_t nr_pending_reqs(struct xen_netbk *netbk)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 {
 	return MAX_PENDING_REQS -
-		netbk->pending_prod + netbk->pending_cons;
-}
-
-static void xen_netbk_kick_thread(struct xen_netbk *netbk)
-{
-	wake_up(&netbk->wq);
+		vif->pending_prod + vif->pending_cons;
 }

 static int max_required_rx_slots(struct xenvif *vif)
@@ -251,7 +169,7 @@ static int max_required_rx_slots(struct xenvif *vif)
 	return max;
 }

-int xen_netbk_rx_ring_full(struct xenvif *vif)
+int xenvif_rx_ring_full(struct xenvif *vif)
 {
 	RING_IDX peek   = vif->rx_req_cons_peek;
 	RING_IDX needed = max_required_rx_slots(vif);
@@ -260,16 +178,16 @@ int xen_netbk_rx_ring_full(struct xenvif *vif)
 	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
 }

-int xen_netbk_must_stop_queue(struct xenvif *vif)
+int xenvif_must_stop_queue(struct xenvif *vif)
 {
-	if (!xen_netbk_rx_ring_full(vif))
+	if (!xenvif_rx_ring_full(vif))
 		return 0;

 	vif->rx.sring->req_event = vif->rx_req_cons_peek +
 		max_required_rx_slots(vif);
 	mb(); /* request notification /then/ check the queue */

-	return xen_netbk_rx_ring_full(vif);
+	return xenvif_rx_ring_full(vif);
 }

 /*
@@ -315,9 +233,9 @@ static bool start_new_rx_buffer(int offset, unsigned long size, int head)
 /*
  * Figure out how many ring slots we're going to need to send @skb to
  * the guest. This function is essentially a dry run of
- * netbk_gop_frag_copy.
+ * xenvif_gop_frag_copy.
*/ -unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb) +unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb) { unsigned int count; int i, copy_off; @@ -369,15 +287,15 @@ struct netrx_pending_operations { unsigned copy_prod, copy_cons; unsigned meta_prod, meta_cons; struct gnttab_copy *copy; - struct netbk_rx_meta *meta; + struct xenvif_rx_meta *meta; int copy_off; grant_ref_t copy_gref; }; -static struct netbk_rx_meta *get_next_rx_buffer(struct xenvif *vif, - struct netrx_pending_operations *npo) +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif, + struct netrx_pending_operations *npo) { - struct netbk_rx_meta *meta; + struct xenvif_rx_meta *meta; struct xen_netif_rx_request *req; req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); @@ -397,13 +315,13 @@ static struct netbk_rx_meta *get_next_rx_buffer(struct xenvif *vif, * Set up the grant operations for this fragment. If it''s a flipping * interface, we also set up the unmap request from here. */ -static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, - struct netrx_pending_operations *npo, - struct page *page, unsigned long size, - unsigned long offset, int *head) +static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, + struct netrx_pending_operations *npo, + struct page *page, unsigned long size, + unsigned long offset, int *head) { struct gnttab_copy *copy_gop; - struct netbk_rx_meta *meta; + struct xenvif_rx_meta *meta; unsigned long bytes; /* Data must not cross a page boundary. 
*/ @@ -439,9 +357,9 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, copy_gop = npo->copy + npo->copy_prod++; copy_gop->flags = GNTCOPY_dest_gref; + copy_gop->source.domid = DOMID_SELF; copy_gop->source.u.gmfn = virt_to_mfn(page_address(page)); - copy_gop->source.offset = offset; copy_gop->dest.domid = vif->domid; @@ -483,14 +401,14 @@ static void netbk_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, * zero GSO descriptors (for non-GSO packets) or one descriptor (for * frontend-side LRO). */ -static int netbk_gop_skb(struct sk_buff *skb, - struct netrx_pending_operations *npo) +static int xenvif_gop_skb(struct sk_buff *skb, + struct netrx_pending_operations *npo) { struct xenvif *vif = netdev_priv(skb->dev); int nr_frags = skb_shinfo(skb)->nr_frags; int i; struct xen_netif_rx_request *req; - struct netbk_rx_meta *meta; + struct xenvif_rx_meta *meta; unsigned char *data; int head = 1; int old_meta_prod; @@ -527,30 +445,30 @@ static int netbk_gop_skb(struct sk_buff *skb, if (data + len > skb_tail_pointer(skb)) len = skb_tail_pointer(skb) - data; - netbk_gop_frag_copy(vif, skb, npo, - virt_to_page(data), len, offset, &head); + xenvif_gop_frag_copy(vif, skb, npo, + virt_to_page(data), len, offset, &head); data += len; } for (i = 0; i < nr_frags; i++) { - netbk_gop_frag_copy(vif, skb, npo, - skb_frag_page(&skb_shinfo(skb)->frags[i]), - skb_frag_size(&skb_shinfo(skb)->frags[i]), - skb_shinfo(skb)->frags[i].page_offset, - &head); + xenvif_gop_frag_copy(vif, skb, npo, + skb_frag_page(&skb_shinfo(skb)->frags[i]), + skb_frag_size(&skb_shinfo(skb)->frags[i]), + skb_shinfo(skb)->frags[i].page_offset, + &head); } return npo->meta_prod - old_meta_prod; } /* - * This is a twin to netbk_gop_skb. Assume that netbk_gop_skb was + * This is a twin to xenvif_gop_skb. Assume that xenvif_gop_skb was * used to set up the operations on the top of * netrx_pending_operations, which have since been done. 
Check that * they didn''t give any errors and advance over them. */ -static int netbk_check_gop(struct xenvif *vif, int nr_meta_slots, - struct netrx_pending_operations *npo) +static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots, + struct netrx_pending_operations *npo) { struct gnttab_copy *copy_op; int status = XEN_NETIF_RSP_OKAY; @@ -569,9 +487,9 @@ static int netbk_check_gop(struct xenvif *vif, int nr_meta_slots, return status; } -static void netbk_add_frag_responses(struct xenvif *vif, int status, - struct netbk_rx_meta *meta, - int nr_meta_slots) +static void xenvif_add_frag_responses(struct xenvif *vif, int status, + struct xenvif_rx_meta *meta, + int nr_meta_slots) { int i; unsigned long offset; @@ -599,9 +517,13 @@ struct skb_cb_overlay { int meta_slots_used; }; -static void xen_netbk_rx_action(struct xen_netbk *netbk) +static void xenvif_kick_thread(struct xenvif *vif) +{ + wake_up(&vif->wq); +} + +void xenvif_rx_action(struct xenvif *vif) { - struct xenvif *vif = NULL, *tmp; s8 status; u16 flags; struct xen_netif_rx_response *resp; @@ -614,8 +536,9 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) unsigned long offset; struct skb_cb_overlay *sco; struct gnttab_copy *gco = get_cpu_var(grant_copy_op); - struct netbk_rx_meta *m = get_cpu_var(meta); + struct xenvif_rx_meta *m = get_cpu_var(meta); static int unusable_count; + int need_to_notify = 0; struct netrx_pending_operations npo = { .copy = gco, @@ -629,9 +552,9 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) printk(KERN_ALERT "xen-netback: " "CPU %d scratch space is not available," - " not doing any TX work for netback/%d\n", + " not doing any TX work for vif%d.%d\n", smp_processor_id(), - (int)(netbk - xen_netbk)); + vif->domid, vif->handle); unusable_count = 0; } else unusable_count++; @@ -642,12 +565,12 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) count = 0; - while ((skb = skb_dequeue(&netbk->rx_queue)) != NULL) { + while ((skb = 
skb_dequeue(&vif->rx_queue)) != NULL) { vif = netdev_priv(skb->dev); nr_frags = skb_shinfo(skb)->nr_frags; sco = (struct skb_cb_overlay *)skb->cb; - sco->meta_slots_used = netbk_gop_skb(skb, &npo); + sco->meta_slots_used = xenvif_gop_skb(skb, &npo); count += nr_frags + 1; @@ -693,7 +616,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) vif->dev->stats.tx_bytes += skb->len; vif->dev->stats.tx_packets++; - status = netbk_check_gop(vif, sco->meta_slots_used, &npo); + status = xenvif_check_gop(vif, sco->meta_slots_used, &npo); if (sco->meta_slots_used == 1) flags = 0; @@ -729,124 +652,46 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) gso->flags = 0; } - netbk_add_frag_responses(vif, status, - m + npo.meta_cons + 1, - sco->meta_slots_used); + xenvif_add_frag_responses(vif, status, + m + npo.meta_cons + 1, + sco->meta_slots_used); RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); - if (ret && list_empty(&vif->notify_list)) - list_add_tail(&vif->notify_list, ¬ify); + if (ret) + need_to_notify = 1; xenvif_notify_tx_completion(vif); - xenvif_put(vif); npo.meta_cons += sco->meta_slots_used; dev_kfree_skb(skb); } - list_for_each_entry_safe(vif, tmp, ¬ify, notify_list) { + if (need_to_notify) notify_remote_via_irq(vif->rx_irq); - list_del_init(&vif->notify_list); - } /* More work to do? 
*/ - if (!skb_queue_empty(&netbk->rx_queue) && - !timer_pending(&netbk->net_timer)) - xen_netbk_kick_thread(netbk); + if (!skb_queue_empty(&vif->rx_queue)) + xenvif_kick_thread(vif); put_cpu_var(grant_copy_op); put_cpu_var(meta); } -void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb) +void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb) { - struct xen_netbk *netbk = vif->netbk; + skb_queue_tail(&vif->rx_queue, skb); - skb_queue_tail(&netbk->rx_queue, skb); - - xen_netbk_kick_thread(netbk); -} - -static void xen_netbk_alarm(unsigned long data) -{ - struct xen_netbk *netbk = (struct xen_netbk *)data; - xen_netbk_kick_thread(netbk); -} - -static int __on_net_schedule_list(struct xenvif *vif) -{ - return !list_empty(&vif->schedule_list); -} - -/* Must be called with net_schedule_list_lock held */ -static void remove_from_net_schedule_list(struct xenvif *vif) -{ - if (likely(__on_net_schedule_list(vif))) { - list_del_init(&vif->schedule_list); - xenvif_put(vif); - } -} - -static struct xenvif *poll_net_schedule_list(struct xen_netbk *netbk) -{ - struct xenvif *vif = NULL; - - spin_lock_irq(&netbk->net_schedule_list_lock); - if (list_empty(&netbk->net_schedule_list)) - goto out; - - vif = list_first_entry(&netbk->net_schedule_list, - struct xenvif, schedule_list); - if (!vif) - goto out; - - xenvif_get(vif); - - remove_from_net_schedule_list(vif); -out: - spin_unlock_irq(&netbk->net_schedule_list_lock); - return vif; -} - -void xen_netbk_schedule_xenvif(struct xenvif *vif) -{ - unsigned long flags; - struct xen_netbk *netbk = vif->netbk; - - if (__on_net_schedule_list(vif)) - goto kick; - - spin_lock_irqsave(&netbk->net_schedule_list_lock, flags); - if (!__on_net_schedule_list(vif) && - likely(xenvif_schedulable(vif))) { - list_add_tail(&vif->schedule_list, &netbk->net_schedule_list); - xenvif_get(vif); - } - spin_unlock_irqrestore(&netbk->net_schedule_list_lock, flags); - -kick: - smp_mb(); - if ((nr_pending_reqs(netbk) < 
(MAX_PENDING_REQS/2)) && - !list_empty(&netbk->net_schedule_list)) - xen_netbk_kick_thread(netbk); + xenvif_kick_thread(vif); } -void xen_netbk_deschedule_xenvif(struct xenvif *vif) -{ - struct xen_netbk *netbk = vif->netbk; - spin_lock_irq(&netbk->net_schedule_list_lock); - remove_from_net_schedule_list(vif); - spin_unlock_irq(&netbk->net_schedule_list_lock); -} - -void xen_netbk_check_rx_xenvif(struct xenvif *vif) +void xenvif_check_rx_xenvif(struct xenvif *vif) { int more_to_do; RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do); if (more_to_do) - xen_netbk_schedule_xenvif(vif); + napi_schedule(&vif->napi); } static void tx_add_credit(struct xenvif *vif) @@ -873,11 +718,11 @@ static void tx_credit_callback(unsigned long data) { struct xenvif *vif = (struct xenvif *)data; tx_add_credit(vif); - xen_netbk_check_rx_xenvif(vif); + xenvif_check_rx_xenvif(vif); } -static void netbk_tx_err(struct xenvif *vif, - struct xen_netif_tx_request *txp, RING_IDX end) +static void xenvif_tx_err(struct xenvif *vif, + struct xen_netif_tx_request *txp, RING_IDX end) { RING_IDX cons = vif->tx.req_cons; @@ -888,21 +733,18 @@ static void netbk_tx_err(struct xenvif *vif, txp = RING_GET_REQUEST(&vif->tx, cons++); } while (1); vif->tx.req_cons = cons; - xen_netbk_check_rx_xenvif(vif); - xenvif_put(vif); } -static void netbk_fatal_tx_err(struct xenvif *vif) +static void xenvif_fatal_tx_err(struct xenvif *vif) { netdev_err(vif->dev, "fatal error; disabling device\n"); xenvif_carrier_off(vif); - xenvif_put(vif); } -static int netbk_count_requests(struct xenvif *vif, - struct xen_netif_tx_request *first, - struct xen_netif_tx_request *txp, - int work_to_do) +static int xenvif_count_requests(struct xenvif *vif, + struct xen_netif_tx_request *first, + struct xen_netif_tx_request *txp, + int work_to_do) { RING_IDX cons = vif->tx.req_cons; int slots = 0; @@ -919,7 +761,7 @@ static int netbk_count_requests(struct xenvif *vif, netdev_err(vif->dev, "Asked for %d slots but exceeds this limit\n", 
work_to_do); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -ENODATA; } @@ -930,7 +772,7 @@ static int netbk_count_requests(struct xenvif *vif, netdev_err(vif->dev, "Malicious frontend using %d slots, threshold %u\n", slots, fatal_skb_slots); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -E2BIG; } @@ -978,7 +820,7 @@ static int netbk_count_requests(struct xenvif *vif, if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) { netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n", txp->offset, txp->size); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -EINVAL; } @@ -990,29 +832,30 @@ static int netbk_count_requests(struct xenvif *vif, } while (more_data); if (drop_err) { - netbk_tx_err(vif, first, cons + slots); + xenvif_tx_err(vif, first, cons + slots); return drop_err; } return slots; } -static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk, - u16 pending_idx) +static struct page *xenvif_alloc_page(struct xenvif *vif, + u16 pending_idx) { struct page *page; + page = alloc_page(GFP_KERNEL|__GFP_COLD); if (!page) return NULL; - netbk->mmap_pages[pending_idx] = page; + vif->mmap_pages[pending_idx] = page; + return page; } -static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk, - struct xenvif *vif, - struct sk_buff *skb, - struct xen_netif_tx_request *txp, - struct gnttab_copy *gop) +static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif, + struct sk_buff *skb, + struct xen_netif_tx_request *txp, + struct gnttab_copy *gop) { struct skb_shared_info *shinfo = skb_shinfo(skb); skb_frag_t *frags = shinfo->frags; @@ -1035,12 +878,12 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk, /* Coalesce tx requests, at this point the packet passed in * should be <= 64K. Any packets larger than 64K have been - * handled in netbk_count_requests(). + * handled in xenvif_count_requests(). 
*/ for (shinfo->nr_frags = slot = start; slot < nr_slots; shinfo->nr_frags++) { struct pending_tx_info *pending_tx_info - netbk->pending_tx_info; + vif->pending_tx_info; page = alloc_page(GFP_KERNEL|__GFP_COLD); if (!page) @@ -1077,21 +920,18 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk, gop->len = txp->size; dst_offset += gop->len; - index = pending_index(netbk->pending_cons++); + index = pending_index(vif->pending_cons++); - pending_idx = netbk->pending_ring[index]; + pending_idx = vif->pending_ring[index]; memcpy(&pending_tx_info[pending_idx].req, txp, sizeof(*txp)); - xenvif_get(vif); - - pending_tx_info[pending_idx].vif = vif; /* Poison these fields, corresponding * fields for head tx req will be set * to correct values after the loop. */ - netbk->mmap_pages[pending_idx] = (void *)(~0UL); + vif->mmap_pages[pending_idx] = (void *)(~0UL); pending_tx_info[pending_idx].head INVALID_PENDING_RING_IDX; @@ -1111,7 +951,7 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk, first->req.offset = 0; first->req.size = dst_offset; first->head = start_idx; - netbk->mmap_pages[head_idx] = page; + vif->mmap_pages[head_idx] = page; frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx); } @@ -1121,20 +961,20 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk, err: /* Unwind, freeing all pages and sending error responses. */ while (shinfo->nr_frags-- > start) { - xen_netbk_idx_release(netbk, + xenvif_idx_release(vif, frag_get_pending_idx(&frags[shinfo->nr_frags]), XEN_NETIF_RSP_ERROR); } /* The head too, if necessary. 
*/ if (start) - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR); return NULL; } -static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, - struct sk_buff *skb, - struct gnttab_copy **gopp) +static int xenvif_tx_check_gop(struct xenvif *vif, + struct sk_buff *skb, + struct gnttab_copy **gopp) { struct gnttab_copy *gop = *gopp; u16 pending_idx = *((u16 *)skb->data); @@ -1147,7 +987,7 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, /* Check status of header. */ err = gop->status; if (unlikely(err)) - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR); /* Skip first skb fragment if it is on same page as header fragment. */ start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx); @@ -1157,7 +997,7 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, pending_ring_idx_t head; pending_idx = frag_get_pending_idx(&shinfo->frags[i]); - tx_info = &netbk->pending_tx_info[pending_idx]; + tx_info = &vif->pending_tx_info[pending_idx]; head = tx_info->head; /* Check error status: if okay then remember grant handle. */ @@ -1165,18 +1005,18 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, newerr = (++gop)->status; if (newerr) break; - peek = netbk->pending_ring[pending_index(++head)]; - } while (!pending_tx_is_head(netbk, peek)); + peek = vif->pending_ring[pending_index(++head)]; + } while (!pending_tx_is_head(vif, peek)); if (likely(!newerr)) { /* Had a previous error? Invalidate this fragment. */ if (unlikely(err)) - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); continue; } /* Error on this fragment: respond to client with an error. */ - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR); /* Not the first error? 
Preceding frags already invalidated. */ if (err) @@ -1184,10 +1024,10 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, /* First error: invalidate header and preceding fragments. */ pending_idx = *((u16 *)skb->data); - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); for (j = start; j < i; j++) { pending_idx = frag_get_pending_idx(&shinfo->frags[j]); - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); } /* Remember the error: invalidate all subsequent fragments. */ @@ -1198,7 +1038,7 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk, return err; } -static void xen_netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb) +static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb) { struct skb_shared_info *shinfo = skb_shinfo(skb); int nr_frags = shinfo->nr_frags; @@ -1212,20 +1052,20 @@ static void xen_netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb) pending_idx = frag_get_pending_idx(frag); - txp = &netbk->pending_tx_info[pending_idx].req; - page = virt_to_page(idx_to_kaddr(netbk, pending_idx)); + txp = &vif->pending_tx_info[pending_idx].req; + page = virt_to_page(idx_to_kaddr(vif, pending_idx)); __skb_fill_page_desc(skb, i, page, txp->offset, txp->size); skb->len += txp->size; skb->data_len += txp->size; skb->truesize += txp->size; - /* Take an extra reference to offset xen_netbk_idx_release */ - get_page(netbk->mmap_pages[pending_idx]); - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY); + /* Take an extra reference to offset xenvif_idx_release */ + get_page(vif->mmap_pages[pending_idx]); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); } } -static int xen_netbk_get_extras(struct xenvif *vif, +static int xenvif_get_extras(struct xenvif *vif, struct xen_netif_extra_info *extras, int work_to_do) { @@ -1235,7 +1075,7 @@ static int 
xen_netbk_get_extras(struct xenvif *vif, do { if (unlikely(work_to_do-- <= 0)) { netdev_err(vif->dev, "Missing extra info\n"); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -EBADR; } @@ -1246,7 +1086,7 @@ static int xen_netbk_get_extras(struct xenvif *vif, vif->tx.req_cons = ++cons; netdev_err(vif->dev, "Invalid extra type: %d\n", extra.type); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -EINVAL; } @@ -1257,20 +1097,20 @@ static int xen_netbk_get_extras(struct xenvif *vif, return work_to_do; } -static int netbk_set_skb_gso(struct xenvif *vif, - struct sk_buff *skb, - struct xen_netif_extra_info *gso) +static int xenvif_set_skb_gso(struct xenvif *vif, + struct sk_buff *skb, + struct xen_netif_extra_info *gso) { if (!gso->u.gso.size) { netdev_err(vif->dev, "GSO size must not be zero.\n"); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -EINVAL; } /* Currently only TCPv4 S.O. is supported. */ if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) { netdev_err(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); return -EINVAL; } @@ -1381,17 +1221,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) return false; } -static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, - struct gnttab_copy *tco) +static unsigned xenvif_tx_build_gops(struct xenvif *vif, + struct gnttab_copy *tco) { struct gnttab_copy *gop = tco, *request_gop; struct sk_buff *skb; int ret; - while ((nr_pending_reqs(netbk) + XEN_NETBK_LEGACY_SLOTS_MAX - < MAX_PENDING_REQS) && - !list_empty(&netbk->net_schedule_list)) { - struct xenvif *vif; + while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX + < MAX_PENDING_REQS)) { struct xen_netif_tx_request txreq; struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX]; struct page *page; @@ -1402,16 +1240,6 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, unsigned int data_len; pending_ring_idx_t index; - /* Get 
a netif from the list with work to do. */ - vif = poll_net_schedule_list(netbk); - /* This can sometimes happen because the test of - * list_empty(net_schedule_list) at the top of the - * loop is unlocked. Just go back and have another - * look. - */ - if (!vif) - continue; - if (vif->tx.sring->req_prod - vif->tx.req_cons > XEN_NETIF_TX_RING_SIZE) { netdev_err(vif->dev, @@ -1419,15 +1247,13 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, "req_prod %d, req_cons %d, size %ld\n", vif->tx.sring->req_prod, vif->tx.req_cons, XEN_NETIF_TX_RING_SIZE); - netbk_fatal_tx_err(vif); + xenvif_fatal_tx_err(vif); continue; } RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do); - if (!work_to_do) { - xenvif_put(vif); - continue; - } + if (!work_to_do) + break; idx = vif->tx.req_cons; rmb(); /* Ensure that we see the request before we copy it. */ @@ -1435,10 +1261,8 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, /* Credit-based scheduling. */ if (txreq.size > vif->remaining_credit && - tx_credit_exceeded(vif, txreq.size)) { - xenvif_put(vif); - continue; - } + tx_credit_exceeded(vif, txreq.size)) + break; vif->remaining_credit -= txreq.size; @@ -1447,24 +1271,24 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, memset(extras, 0, sizeof(extras)); if (txreq.flags & XEN_NETTXF_extra_info) { - work_to_do = xen_netbk_get_extras(vif, extras, + work_to_do = xenvif_get_extras(vif, extras, work_to_do); idx = vif->tx.req_cons; if (unlikely(work_to_do < 0)) - continue; + break; } - ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do); + ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do); if (unlikely(ret < 0)) - continue; + break; idx += ret; if (unlikely(txreq.size < ETH_HLEN)) { netdev_dbg(vif->dev, "Bad packet size: %d\n", txreq.size); - netbk_tx_err(vif, &txreq, idx); - continue; + xenvif_tx_err(vif, &txreq, idx); + break; } /* No crossing a page as the payload mustn''t fragment. 
*/ @@ -1473,12 +1297,12 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, "txreq.offset: %x, size: %u, end: %lu\n", txreq.offset, txreq.size, (txreq.offset&~PAGE_MASK) + txreq.size); - netbk_fatal_tx_err(vif); - continue; + xenvif_fatal_tx_err(vif); + break; } - index = pending_index(netbk->pending_cons); - pending_idx = netbk->pending_ring[index]; + index = pending_index(vif->pending_cons); + pending_idx = vif->pending_ring[index]; data_len = (txreq.size > PKT_PROT_LEN && ret < XEN_NETBK_LEGACY_SLOTS_MAX) ? @@ -1489,7 +1313,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, if (unlikely(skb == NULL)) { netdev_dbg(vif->dev, "Can''t allocate a skb in start_xmit.\n"); - netbk_tx_err(vif, &txreq, idx); + xenvif_tx_err(vif, &txreq, idx); break; } @@ -1500,19 +1324,20 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, struct xen_netif_extra_info *gso; gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1]; - if (netbk_set_skb_gso(vif, skb, gso)) { - /* Failure in netbk_set_skb_gso is fatal. */ + if (xenvif_set_skb_gso(vif, skb, gso)) { + /* Failure in xenvif_set_skb_gso is fatal. */ kfree_skb(skb); - continue; + /* XXX ???? 
break or continue ?*/ + break; } } /* XXX could copy straight to head */ - page = xen_netbk_alloc_page(netbk, pending_idx); + page = xenvif_alloc_page(vif, pending_idx); if (!page) { kfree_skb(skb); - netbk_tx_err(vif, &txreq, idx); - continue; + xenvif_tx_err(vif, &txreq, idx); + break; } gop->source.u.ref = txreq.gref; @@ -1528,10 +1353,9 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, gop++; - memcpy(&netbk->pending_tx_info[pending_idx].req, + memcpy(&vif->pending_tx_info[pending_idx].req, &txreq, sizeof(txreq)); - netbk->pending_tx_info[pending_idx].vif = vif; - netbk->pending_tx_info[pending_idx].head = index; + vif->pending_tx_info[pending_idx].head = index; *((u16 *)skb->data) = pending_idx; __skb_put(skb, data_len); @@ -1546,21 +1370,19 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, INVALID_PENDING_IDX); } - netbk->pending_cons++; + vif->pending_cons++; - request_gop = xen_netbk_get_requests(netbk, vif, - skb, txfrags, gop); + request_gop = xenvif_get_requests(vif, skb, txfrags, gop); if (request_gop == NULL) { kfree_skb(skb); - netbk_tx_err(vif, &txreq, idx); - continue; + xenvif_tx_err(vif, &txreq, idx); + break; } gop = request_gop; - __skb_queue_tail(&netbk->tx_queue, skb); + __skb_queue_tail(&vif->tx_queue, skb); vif->tx.req_cons = idx; - xen_netbk_check_rx_xenvif(vif); if ((gop-tco) >= TX_COPY_OPS_SIZE) break; @@ -1569,24 +1391,25 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, return gop - tco; } -static void xen_netbk_tx_submit(struct xen_netbk *netbk, - struct gnttab_copy *tco) +static int xenvif_tx_submit(struct xenvif *vif, + struct gnttab_copy *tco, + int budget) { struct gnttab_copy *gop = tco; struct sk_buff *skb; + int work_done = 0; - while ((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) { + while (work_done < budget && + (skb = __skb_dequeue(&vif->tx_queue)) != NULL) { struct xen_netif_tx_request *txp; - struct xenvif *vif; u16 pending_idx; unsigned data_len; pending_idx = 
*((u16 *)skb->data); - vif = netbk->pending_tx_info[pending_idx].vif; - txp = &netbk->pending_tx_info[pending_idx].req; + txp = &vif->pending_tx_info[pending_idx].req; /* Check the remap error code. */ - if (unlikely(xen_netbk_tx_check_gop(netbk, skb, &gop))) { + if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) { netdev_dbg(vif->dev, "netback grant failed.\n"); skb_shinfo(skb)->nr_frags = 0; kfree_skb(skb); @@ -1595,7 +1418,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk, data_len = skb->len; memcpy(skb->data, - (void *)(idx_to_kaddr(netbk, pending_idx)|txp->offset), + (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset), data_len); if (data_len < txp->size) { /* Append the packet payload as a fragment. */ @@ -1603,7 +1426,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk, txp->size -= data_len; } else { /* Schedule a response immediately. */ - xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY); + xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); } if (txp->flags & XEN_NETTXF_csum_blank) @@ -1611,7 +1434,7 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk, else if (txp->flags & XEN_NETTXF_data_validated) skb->ip_summed = CHECKSUM_UNNECESSARY; - xen_netbk_fill_frags(netbk, skb); + xenvif_fill_frags(vif, skb); /* * If the initial fragment was < PKT_PROT_LEN then @@ -1639,14 +1462,19 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk, vif->dev->stats.rx_bytes += skb->len; vif->dev->stats.rx_packets++; - xenvif_receive_skb(vif, skb); + work_done++; + + netif_receive_skb(skb); } + + return work_done; } /* Called after netfront has transmitted */ -static void xen_netbk_tx_action(struct xen_netbk *netbk) +int xenvif_tx_action(struct xenvif *vif, int budget) { unsigned nr_gops; + int work_done; struct gnttab_copy *tco; static int unusable_count; @@ -1658,56 +1486,62 @@ static void xen_netbk_tx_action(struct xen_netbk *netbk) printk(KERN_ALERT "xen-netback: " "CPU %d scratch space is not available," - " not doing 
any RX work for netback/%d\n", + " not doing any RX work for vif%d.%d\n", smp_processor_id(), - (int)(netbk - xen_netbk)); + vif->domid, vif->handle); } else unusable_count++; - return; + return 0; + } + + if (unlikely(!tx_work_todo(vif))) { + put_cpu_var(tx_copy_ops); + return 0; } - nr_gops = xen_netbk_tx_build_gops(netbk, tco); + + nr_gops = xenvif_tx_build_gops(vif, tco); if (nr_gops == 0) { put_cpu_var(tx_copy_ops); - return; + return 0; } gnttab_batch_copy(tco, nr_gops); - xen_netbk_tx_submit(netbk, tco); + work_done = xenvif_tx_submit(vif, tco, nr_gops); put_cpu_var(tx_copy_ops); + + return work_done; } -static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx, +static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx, u8 status) { - struct xenvif *vif; struct pending_tx_info *pending_tx_info; pending_ring_idx_t head; u16 peek; /* peek into next tx request */ - BUG_ON(netbk->mmap_pages[pending_idx] == (void *)(~0UL)); + BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL)); /* Already complete? 
*/ - if (netbk->mmap_pages[pending_idx] == NULL) + if (vif->mmap_pages[pending_idx] == NULL) return; - pending_tx_info = &netbk->pending_tx_info[pending_idx]; + pending_tx_info = &vif->pending_tx_info[pending_idx]; - vif = pending_tx_info->vif; head = pending_tx_info->head; - BUG_ON(!pending_tx_is_head(netbk, head)); - BUG_ON(netbk->pending_ring[pending_index(head)] != pending_idx); + BUG_ON(!pending_tx_is_head(vif, head)); + BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx); do { pending_ring_idx_t index; pending_ring_idx_t idx = pending_index(head); - u16 info_idx = netbk->pending_ring[idx]; + u16 info_idx = vif->pending_ring[idx]; - pending_tx_info = &netbk->pending_tx_info[info_idx]; + pending_tx_info = &vif->pending_tx_info[info_idx]; make_tx_response(vif, &pending_tx_info->req, status); /* Setting any number other than @@ -1716,18 +1550,15 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx, */ pending_tx_info->head = 0; - index = pending_index(netbk->pending_prod++); - netbk->pending_ring[index] = netbk->pending_ring[info_idx]; - - xenvif_put(vif); + index = pending_index(vif->pending_prod++); + vif->pending_ring[index] = vif->pending_ring[info_idx]; - peek = netbk->pending_ring[pending_index(++head)]; + peek = vif->pending_ring[pending_index(++head)]; - } while (!pending_tx_is_head(netbk, peek)); + } while (!pending_tx_is_head(vif, peek)); - netbk->mmap_pages[pending_idx]->mapping = 0; - put_page(netbk->mmap_pages[pending_idx]); - netbk->mmap_pages[pending_idx] = NULL; + put_page(vif->mmap_pages[pending_idx]); + vif->mmap_pages[pending_idx] = NULL; } @@ -1775,45 +1606,22 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif, return resp; } -static inline int rx_work_todo(struct xen_netbk *netbk) +static inline int rx_work_todo(struct xenvif *vif) { - return !skb_queue_empty(&netbk->rx_queue); + return !skb_queue_empty(&vif->rx_queue); } -static inline int tx_work_todo(struct xen_netbk *netbk) 
+static inline int tx_work_todo(struct xenvif *vif) { - if ((nr_pending_reqs(netbk) + XEN_NETBK_LEGACY_SLOTS_MAX - < MAX_PENDING_REQS) && - !list_empty(&netbk->net_schedule_list)) + if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) && + (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX + < MAX_PENDING_REQS)) return 1; return 0; } -static int xen_netbk_kthread(void *data) -{ - struct xen_netbk *netbk = data; - while (!kthread_should_stop()) { - wait_event_interruptible(netbk->wq, - rx_work_todo(netbk) || - tx_work_todo(netbk) || - kthread_should_stop()); - cond_resched(); - - if (kthread_should_stop()) - break; - - if (rx_work_todo(netbk)) - xen_netbk_rx_action(netbk); - - if (tx_work_todo(netbk)) - xen_netbk_tx_action(netbk); - } - - return 0; -} - static int __create_percpu_scratch_space(unsigned int cpu) { if (per_cpu(tx_copy_ops, cpu) || @@ -1830,7 +1638,7 @@ static int __create_percpu_scratch_space(unsigned int cpu) cpu_to_node(cpu)); per_cpu(meta, cpu) - vzalloc_node(sizeof(struct netbk_rx_meta) * META_SIZE, + vzalloc_node(sizeof(struct xenvif_rx_meta) * META_SIZE, cpu_to_node(cpu)); if (!per_cpu(tx_copy_ops, cpu) || @@ -1901,7 +1709,7 @@ static struct notifier_block netback_notifier_block = { .notifier_call = __netback_percpu_callback, }; -void xen_netbk_unmap_frontend_rings(struct xenvif *vif) +void xenvif_unmap_frontend_rings(struct xenvif *vif) { if (vif->tx.sring) xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif), @@ -1911,9 +1719,9 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif) vif->rx.sring); } -int xen_netbk_map_frontend_rings(struct xenvif *vif, - grant_ref_t tx_ring_ref, - grant_ref_t rx_ring_ref) +int xenvif_map_frontend_rings(struct xenvif *vif, + grant_ref_t tx_ring_ref, + grant_ref_t rx_ring_ref) { void *addr; struct xen_netif_tx_sring *txs; @@ -1942,15 +1750,33 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif, return 0; err: - xen_netbk_unmap_frontend_rings(vif); + xenvif_unmap_frontend_rings(vif); return err; } +int 
xenvif_kthread(void *data) +{ + struct xenvif *vif = data; + + while (!kthread_should_stop()) { + wait_event_interruptible(vif->wq, + rx_work_todo(vif) || + kthread_should_stop()); + cond_resched(); + + if (kthread_should_stop()) + break; + + if (rx_work_todo(vif)) + xenvif_rx_action(vif); + } + + return 0; +} + static int __init netback_init(void) { - int i; int rc = 0; - int group; int cpu; if (!xen_domain()) @@ -1972,63 +1798,12 @@ static int __init netback_init(void) } register_hotcpu_notifier(&netback_notifier_block); - xen_netbk_group_nr = num_online_cpus(); - xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr); - if (!xen_netbk) { - goto failed_init; - rc = -ENOMEM; - } - - for (group = 0; group < xen_netbk_group_nr; group++) { - struct xen_netbk *netbk = &xen_netbk[group]; - skb_queue_head_init(&netbk->rx_queue); - skb_queue_head_init(&netbk->tx_queue); - - init_timer(&netbk->net_timer); - netbk->net_timer.data = (unsigned long)netbk; - netbk->net_timer.function = xen_netbk_alarm; - - netbk->pending_cons = 0; - netbk->pending_prod = MAX_PENDING_REQS; - for (i = 0; i < MAX_PENDING_REQS; i++) - netbk->pending_ring[i] = i; - - init_waitqueue_head(&netbk->wq); - netbk->task = kthread_create(xen_netbk_kthread, - (void *)netbk, - "netback/%u", group); - - if (IS_ERR(netbk->task)) { - printk(KERN_ALERT "kthread_create() fails at netback\n"); - del_timer(&netbk->net_timer); - rc = PTR_ERR(netbk->task); - goto failed_init_destroy_kthreads; - } - - kthread_bind(netbk->task, group); - - INIT_LIST_HEAD(&netbk->net_schedule_list); - - spin_lock_init(&netbk->net_schedule_list_lock); - - atomic_set(&netbk->netfront_count, 0); - - wake_up_process(netbk->task); - } - rc = xenvif_xenbus_init(); if (rc) - goto failed_init_destroy_kthreads; + goto failed_init; return 0; -failed_init_destroy_kthreads: - while (--group >= 0) { - struct xen_netbk *netbk = &xen_netbk[group]; - del_timer(&netbk->net_timer); - kthread_stop(netbk->task); - } - vfree(xen_netbk); 
failed_init: for_each_online_cpu(cpu) __free_percpu_scratch_space(cpu); @@ -2040,22 +1815,10 @@ module_init(netback_init); static void __exit netback_fini(void) { - int i, j; + int i; xenvif_xenbus_fini(); - for (i = 0; i < xen_netbk_group_nr; i++) { - struct xen_netbk *netbk = &xen_netbk[i]; - del_timer_sync(&netbk->net_timer); - kthread_stop(netbk->task); - for (j = 0; j < MAX_PENDING_REQS; j++) { - if (netbk->mmap_pages[i]) - __free_page(netbk->mmap_pages[i]); - } - } - - vfree(xen_netbk); - unregister_hotcpu_notifier(&netback_notifier_block); for_each_online_cpu(i) -- 1.7.10.4
David Vrabel
2013-May-28 09:21 UTC
Re: [PATCH 1/3] xen-netback: remove page tracking facility
On 27/05/13 12:29, Wei Liu wrote:
> The data flow from DomU to DomU on the same host:
>
> With tracking facility:
>
>          copy
>  DomU --------> Dom0     DomU
>    |                      ^
>    |______________________|
>              copy
>
> In other words, we can always copy page from Dom0, thus removing the
> need for a tracking facility.
>
>          copy           copy
>  DomU --------> Dom0 -------> DomU
>
> Simple iperf test shows no performance regression (obviously we do two
> copy's anyway):
>
> W/ tracking:  ~5.3Gb/s
> W/o tracking: ~5.4Gb/s
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  drivers/net/xen-netback/netback.c |   77 +------------------------------------
>  1 file changed, 2 insertions(+), 75 deletions(-)

Nice!  This is the sort of patch I like.

David
annie li
2013-May-28 09:47 UTC
Re: [PATCH 2/3] xen-netback: switch to per-cpu scratch space
On 2013-5-27 19:29, Wei Liu wrote:
> There are maximum nr_onlie_cpus netback threads running.

nr_onlie_cpus --> nr_online_cpus

> We can make use
> of per-cpu scratch space to reduce the size of buffer space when we move
> to 1:1 model.
>
> In the unlikely event when per-cpu scratch space is not available,
> processing routines will refuse to run on that CPU.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  drivers/net/xen-netback/netback.c |  247 ++++++++++++++++++++++++++++++-------
>  1 file changed, 204 insertions(+), 43 deletions(-)
>
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 54853be..0f69eda 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -37,6 +37,7 @@
>  #include <linux/kthread.h>
>  #include <linux/if_vlan.h>
>  #include <linux/udp.h>
> +#include <linux/cpu.h>
>
>  #include <net/tcp.h>
>
> @@ -95,6 +96,24 @@ struct netbk_rx_meta {
>
>  #define MAX_BUFFER_OFFSET PAGE_SIZE
>
> +/* Coalescing tx requests before copying makes number of grant
> + * copy ops greater or equal to number of slots required. In
> + * worst case a tx request consumes 2 gnttab_copy. So the size
> + * of tx_copy_ops array should be 2*MAX_PENDING_REQS.
> + */
> +#define TX_COPY_OPS_SIZE (2*MAX_PENDING_REQS)
> +DEFINE_PER_CPU(struct gnttab_copy *, tx_copy_ops);
> +
> +/* Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
> + * head/fragment page uses 2 copy operations because it
> + * straddles two buffers in the frontend. So the size of following
> + * arrays should be 2*XEN_NETIF_RX_RING_SIZE.
> + */ > +#define GRANT_COPY_OP_SIZE (2*XEN_NETIF_RX_RING_SIZE) > +#define META_SIZE (2*XEN_NETIF_RX_RING_SIZE) > +DEFINE_PER_CPU(struct gnttab_copy *, grant_copy_op); > +DEFINE_PER_CPU(struct netbk_rx_meta *, meta); > + > struct xen_netbk { > wait_queue_head_t wq; > struct task_struct *task; > @@ -116,21 +135,7 @@ struct xen_netbk { > atomic_t netfront_count; > > struct pending_tx_info pending_tx_info[MAX_PENDING_REQS]; > - /* Coalescing tx requests before copying makes number of grant > - * copy ops greater or equal to number of slots required. In > - * worst case a tx request consumes 2 gnttab_copy. > - */ > - struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS]; > - > u16 pending_ring[MAX_PENDING_REQS]; > - > - /* > - * Given MAX_BUFFER_OFFSET of 4096 the worst case is that each > - * head/fragment page uses 2 copy operations because it > - * straddles two buffers in the frontend. > - */ > - struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE]; > - struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE]; > }; > > static struct xen_netbk *xen_netbk; > @@ -608,12 +613,31 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > int count; > unsigned long offset; > struct skb_cb_overlay *sco; > + struct gnttab_copy *gco = get_cpu_var(grant_copy_op); > + struct netbk_rx_meta *m = get_cpu_var(meta);Change m to a friendly name?> + static int unusable_count; > > struct netrx_pending_operations npo = { > - .copy = netbk->grant_copy_op, > - .meta = netbk->meta, > + .copy = gco, > + .meta = m, > }; > > + if (gco == NULL || m == NULL) { > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > + if (unusable_count == 1000) {It is better to use a macro to replace this number here. BTW, can you explain why using 1000 here?> + printk(KERN_ALERT > + "xen-netback: " > + "CPU %d scratch space is not available," > + " not doing any TX work for netback/%d\n", > + smp_processor_id(), > + (int)(netbk - xen_netbk));unusable_count is not a value based on netbk here. 
I assume you use unusable_count to judge whether scratch space is available for specific netbk, if so, then unusable_count needs to be counter for specific netbk, not for all netbk.> + unusable_count = 0; > + } else > + unusable_count++; > + return; > + } > + > skb_queue_head_init(&rxq); > > count = 0; > @@ -635,27 +659,30 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > break; > } > > - BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)); > + BUG_ON(npo.meta_prod > META_SIZE); > > - if (!npo.copy_prod) > + if (!npo.copy_prod) { > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > return; > + } > > - BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op)); > - gnttab_batch_copy(netbk->grant_copy_op, npo.copy_prod); > + BUG_ON(npo.copy_prod > GRANT_COPY_OP_SIZE); > + gnttab_batch_copy(gco, npo.copy_prod); > > while ((skb = __skb_dequeue(&rxq)) != NULL) { > sco = (struct skb_cb_overlay *)skb->cb; > > vif = netdev_priv(skb->dev); > > - if (netbk->meta[npo.meta_cons].gso_size && vif->gso_prefix) { > + if (m[npo.meta_cons].gso_size && vif->gso_prefix) { > resp = RING_GET_RESPONSE(&vif->rx, > vif->rx.rsp_prod_pvt++); > > resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data; > > - resp->offset = netbk->meta[npo.meta_cons].gso_size; > - resp->id = netbk->meta[npo.meta_cons].id; > + resp->offset = m[npo.meta_cons].gso_size; > + resp->id = m[npo.meta_cons].id; > resp->status = sco->meta_slots_used; > > npo.meta_cons++; > @@ -680,12 +707,12 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > flags |= XEN_NETRXF_data_validated; > > offset = 0; > - resp = make_rx_response(vif, netbk->meta[npo.meta_cons].id, > + resp = make_rx_response(vif, m[npo.meta_cons].id, > status, offset, > - netbk->meta[npo.meta_cons].size, > + m[npo.meta_cons].size, > flags); > > - if (netbk->meta[npo.meta_cons].gso_size && !vif->gso_prefix) { > + if (m[npo.meta_cons].gso_size && !vif->gso_prefix) { > struct xen_netif_extra_info *gso > (struct xen_netif_extra_info *) > 
RING_GET_RESPONSE(&vif->rx, > @@ -693,7 +720,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > > resp->flags |= XEN_NETRXF_extra_info; > > - gso->u.gso.size = netbk->meta[npo.meta_cons].gso_size; > + gso->u.gso.size = m[npo.meta_cons].gso_size; > gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4; > gso->u.gso.pad = 0; > gso->u.gso.features = 0; > @@ -703,7 +730,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > } > > netbk_add_frag_responses(vif, status, > - netbk->meta + npo.meta_cons + 1, > + m + npo.meta_cons + 1, > sco->meta_slots_used); > > RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); > @@ -726,6 +753,9 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > if (!skb_queue_empty(&netbk->rx_queue) && > !timer_pending(&netbk->net_timer)) > xen_netbk_kick_thread(netbk); > + > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > } > > void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb) > @@ -1351,9 +1381,10 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) > return false; > } > > -static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) > +static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, > + struct gnttab_copy *tco) > { > - struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop; > + struct gnttab_copy *gop = tco, *request_gop; > struct sk_buff *skb; > int ret; > > @@ -1531,16 +1562,17 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) > vif->tx.req_cons = idx; > xen_netbk_check_rx_xenvif(vif); > > - if ((gop-netbk->tx_copy_ops) >= ARRAY_SIZE(netbk->tx_copy_ops)) > + if ((gop-tco) >= TX_COPY_OPS_SIZE) > break; > } > > - return gop - netbk->tx_copy_ops; > + return gop - tco; > } > > -static void xen_netbk_tx_submit(struct xen_netbk *netbk) > +static void xen_netbk_tx_submit(struct xen_netbk *netbk, > + struct gnttab_copy *tco) > { > - struct gnttab_copy *gop = netbk->tx_copy_ops; > + struct gnttab_copy *gop = tco; > struct sk_buff *skb; > > while 
((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) { > @@ -1615,15 +1647,37 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk) > static void xen_netbk_tx_action(struct xen_netbk *netbk) > { > unsigned nr_gops; > + struct gnttab_copy *tco; > + static int unusable_count; > + > + tco = get_cpu_var(tx_copy_ops); > + > + if (tco == NULL) { > + put_cpu_var(tx_copy_ops); > + if (unusable_count == 1000) {Same as above> + printk(KERN_ALERT > + "xen-netback: " > + "CPU %d scratch space is not available," > + " not doing any RX work for netback/%d\n", > + smp_processor_id(), > + (int)(netbk - xen_netbk)); > + } else > + unusable_count++; > + return; > + } > > - nr_gops = xen_netbk_tx_build_gops(netbk); > + nr_gops = xen_netbk_tx_build_gops(netbk, tco); > > - if (nr_gops == 0) > + if (nr_gops == 0) { > + put_cpu_var(tx_copy_ops); > return; > + } > + > + gnttab_batch_copy(tco, nr_gops); > > - gnttab_batch_copy(netbk->tx_copy_ops, nr_gops); > + xen_netbk_tx_submit(netbk, tco); > > - xen_netbk_tx_submit(netbk); > + put_cpu_var(tx_copy_ops); > } > > static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx, > @@ -1760,6 +1814,93 @@ static int xen_netbk_kthread(void *data) > return 0; > } > > +static int __create_percpu_scratch_space(unsigned int cpu) > +{ > + if (per_cpu(tx_copy_ops, cpu) || > + per_cpu(grant_copy_op, cpu) || > + per_cpu(meta, cpu)) > + return 0; > + > + per_cpu(tx_copy_ops, cpu) > + vzalloc_node(sizeof(struct gnttab_copy) * TX_COPY_OPS_SIZE, > + cpu_to_node(cpu)); > + > + per_cpu(grant_copy_op, cpu) > + vzalloc_node(sizeof(struct gnttab_copy) * GRANT_COPY_OP_SIZE, > + cpu_to_node(cpu)); > + > + per_cpu(meta, cpu) > + vzalloc_node(sizeof(struct netbk_rx_meta) * META_SIZE, > + cpu_to_node(cpu)); > + > + if (!per_cpu(tx_copy_ops, cpu) || > + !per_cpu(grant_copy_op, cpu) || > + !per_cpu(meta, cpu)) > + return -ENOMEM; > + > + return 0; > +} > + > +static void __free_percpu_scratch_space(unsigned int cpu) > +{ > + void *tmp; > + > + tmp = 
per_cpu(tx_copy_ops, cpu);It is better to verify whether tmp is available before freeing it, for example: if (tmp)> + per_cpu(tx_copy_ops, cpu) = NULL; > + vfree(tmp); > + > + tmp = per_cpu(grant_copy_op, cpu);same> + per_cpu(grant_copy_op, cpu) = NULL; > + vfree(tmp); > + > + tmp = per_cpu(meta, cpu);same> + per_cpu(meta, cpu) = NULL; > + vfree(tmp); > +} > + > +static int __netback_percpu_callback(struct notifier_block *nfb, > + unsigned long action, void *hcpu) > +{ > + unsigned int cpu = (unsigned long)hcpu; > + int rc = NOTIFY_DONE; > + > + switch (action) { > + case CPU_ONLINE: > + case CPU_ONLINE_FROZEN: > + printk(KERN_INFO "xen-netback: CPU %d online, creating scratch space\n", > + cpu); > + rc = __create_percpu_scratch_space(cpu); > + if (rc) { > + printk(KERN_ALERT "xen-netback: failed to create scratch space for CPU %d\n", > + cpu); > + /* There is really nothing more we can do. Free any > + * partially allocated scratch space. When processing > + * routines get to run they will just print warning > + * message and stop processing. 
> + */ > + __free_percpu_scratch_space(cpu); > + rc = NOTIFY_BAD; > + } else > + rc = NOTIFY_OK; > + break; > + case CPU_DEAD: > + case CPU_DEAD_FROZEN: > + printk(KERN_INFO "xen-netback: CPU %d offline, destroying scratch space\n", > + cpu); > + __free_percpu_scratch_space(cpu); > + rc = NOTIFY_OK; > + break; > + default: > + break; > + } > + > + return rc; > +} > + > +static struct notifier_block netback_notifier_block = { > + .notifier_call = __netback_percpu_callback, > +};Moving this to the top of this file?> + > void xen_netbk_unmap_frontend_rings(struct xenvif *vif) > { > if (vif->tx.sring) > @@ -1810,6 +1951,7 @@ static int __init netback_init(void) > int i; > int rc = 0; > int group; > + int cpu; > > if (!xen_domain()) > return -ENODEV; > @@ -1821,10 +1963,21 @@ static int __init netback_init(void) > fatal_skb_slots = XEN_NETBK_LEGACY_SLOTS_MAX; > } > > + for_each_online_cpu(cpu) { > + rc = __create_percpu_scratch_space(cpu); > + if (rc) { > + rc = -ENOMEM; > + goto failed_init; > + } > + } > + register_hotcpu_notifier(&netback_notifier_block); > + > xen_netbk_group_nr = num_online_cpus(); > xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr); > - if (!xen_netbk) > - return -ENOMEM; > + if (!xen_netbk) { > + goto failed_init; > + rc = -ENOMEM;rc = -ENOMEM is never called. Thanks Annie
Wei Liu
2013-May-28 10:17 UTC
Re: [PATCH 2/3] xen-netback: switch to per-cpu scratch space
On Tue, May 28, 2013 at 05:47:25PM +0800, annie li wrote:
[...]
> >-	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
> >-	struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
> > };
> > static struct xen_netbk *xen_netbk;
> >@@ -608,12 +613,31 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
> >  int count;
> >  unsigned long offset;
> >  struct skb_cb_overlay *sco;
> >+ struct gnttab_copy *gco = get_cpu_var(grant_copy_op);
> >+ struct netbk_rx_meta *m = get_cpu_var(meta);
>
> Change m to a friendly name?

Changed to "meta".

> >+ static int unusable_count;
> >  struct netrx_pending_operations npo = {
> >- .copy = netbk->grant_copy_op,
> >- .meta = netbk->meta,
> >+ .copy = gco,
> >+ .meta = m,
> >  };
> >+ if (gco == NULL || m == NULL) {
> >+ put_cpu_var(grant_copy_op);
> >+ put_cpu_var(meta);
> >+ if (unusable_count == 1000) {
>
> It is better to use a macro to replace this number here.
> BTW, can you explain why using 1000 here?

This is just a random number chosen to avoid flooding dmesg log. :-)

Re macro, this is the only place that uses this test, so there is no
need for a macro. The test in the TX path looks similar, but the
put_cpu_var() in that branch is different, so I doubt we can get much
from defining a macro here.

> >+ printk(KERN_ALERT
> >+ "xen-netback: "
> >+ "CPU %d scratch space is not available,"
> >+ " not doing any TX work for netback/%d\n",
> >+ smp_processor_id(),
> >+ (int)(netbk - xen_netbk));
>
> unusable_count is not a value based on netbk here. I assume you use
> unusable_count to judge whether scratch space is available for
> specific netbk, if so, then unusable_count needs to be counter for
> specific netbk, not for all netbk.

No, it is not based on netbk. It is for a particular CPU. Per-cpu
scratch space is for CPUs, not netbks. A netback thread can always be
scheduled on another CPU in the 1:1 model, so in practice it would
almost never print this warning if unusable_count were per netbk.

[...]

> >+
> >+ if (unusable_count == 1000) {
>
> Same as above

[...]

> >+ return 0;
> >+}
> >+
> >+static void __free_percpu_scratch_space(unsigned int cpu)
> >+{
> >+ void *tmp;
> >+
> >+ tmp = per_cpu(tx_copy_ops, cpu);
>
> It is better to verify whether tmp is available before freeing it,
> for example: if (tmp)

No need to do this, as it is legit to free() a NULL pointer.

[...]

> >+
> >+static struct notifier_block netback_notifier_block = {
> >+ .notifier_call = __netback_percpu_callback,
> >+};
>
> Moving this to the top of this file?

It is sort of a convention to put this kind of thing at the back
rather than at the front.

[...]

Wei.

>
> Thanks
> Annie
Konrad Rzeszutek Wilk
2013-May-28 13:18 UTC
Re: [PATCH 2/3] xen-netback: switch to per-cpu scratch space
On Mon, May 27, 2013 at 12:29:42PM +0100, Wei Liu wrote:> There are maximum nr_onlie_cpus netback threads running. We can make use > of per-cpu scratch space to reduce the size of buffer space when we move > to 1:1 model. > > In the unlikely event when per-cpu scratch space is not available, > processing routines will refuse to run on that CPU. > > Signed-off-by: Wei Liu <wei.liu2@citrix.com> > --- > drivers/net/xen-netback/netback.c | 247 ++++++++++++++++++++++++++++++------- > 1 file changed, 204 insertions(+), 43 deletions(-) > > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c > index 54853be..0f69eda 100644 > --- a/drivers/net/xen-netback/netback.c > +++ b/drivers/net/xen-netback/netback.c > @@ -37,6 +37,7 @@ > #include <linux/kthread.h> > #include <linux/if_vlan.h> > #include <linux/udp.h> > +#include <linux/cpu.h> > > #include <net/tcp.h> > > @@ -95,6 +96,24 @@ struct netbk_rx_meta { > > #define MAX_BUFFER_OFFSET PAGE_SIZE > > +/* Coalescing tx requests before copying makes number of grant > + * copy ops greater or equal to number of slots required. In > + * worst case a tx request consumes 2 gnttab_copy. So the size > + * of tx_copy_ops array should be 2*MAX_PENDING_REQS. > + */ > +#define TX_COPY_OPS_SIZE (2*MAX_PENDING_REQS) > +DEFINE_PER_CPU(struct gnttab_copy *, tx_copy_ops);static> + > +/* Given MAX_BUFFER_OFFSET of 4096 the worst case is that each > + * head/fragment page uses 2 copy operations because it > + * straddles two buffers in the frontend. So the size of following > + * arrays should be 2*XEN_NETIF_RX_RING_SIZE. 
> + */ > +#define GRANT_COPY_OP_SIZE (2*XEN_NETIF_RX_RING_SIZE) > +#define META_SIZE (2*XEN_NETIF_RX_RING_SIZE) > +DEFINE_PER_CPU(struct gnttab_copy *, grant_copy_op); > +DEFINE_PER_CPU(struct netbk_rx_meta *, meta);static for both of them.> + > struct xen_netbk { > wait_queue_head_t wq; > struct task_struct *task; > @@ -116,21 +135,7 @@ struct xen_netbk { > atomic_t netfront_count; > > struct pending_tx_info pending_tx_info[MAX_PENDING_REQS]; > - /* Coalescing tx requests before copying makes number of grant > - * copy ops greater or equal to number of slots required. In > - * worst case a tx request consumes 2 gnttab_copy. > - */ > - struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS]; > - > u16 pending_ring[MAX_PENDING_REQS]; > - > - /* > - * Given MAX_BUFFER_OFFSET of 4096 the worst case is that each > - * head/fragment page uses 2 copy operations because it > - * straddles two buffers in the frontend. > - */ > - struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE]; > - struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE]; > }; > > static struct xen_netbk *xen_netbk; > @@ -608,12 +613,31 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > int count; > unsigned long offset; > struct skb_cb_overlay *sco; > + struct gnttab_copy *gco = get_cpu_var(grant_copy_op); > + struct netbk_rx_meta *m = get_cpu_var(meta); > + static int unusable_count; > > struct netrx_pending_operations npo = { > - .copy = netbk->grant_copy_op, > - .meta = netbk->meta, > + .copy = gco, > + .meta = m, > }; > > + if (gco == NULL || m == NULL) { > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > + if (unusable_count == 1000) {printk_ratelimited ?> + printk(KERN_ALERT > + "xen-netback: " > + "CPU %d scratch space is not available," > + " not doing any TX work for netback/%d\n", > + smp_processor_id(), > + (int)(netbk - xen_netbk));So ... are you going to retry it? Drop it? 
Can you include in the message the the mechanism by which you are going to recover?> + unusable_count = 0; > + } else > + unusable_count++; > + return; > + } > + > skb_queue_head_init(&rxq); > > count = 0; > @@ -635,27 +659,30 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > break; > } > > - BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)); > + BUG_ON(npo.meta_prod > META_SIZE); > > - if (!npo.copy_prod) > + if (!npo.copy_prod) { > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > return; > + } > > - BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op)); > - gnttab_batch_copy(netbk->grant_copy_op, npo.copy_prod); > + BUG_ON(npo.copy_prod > GRANT_COPY_OP_SIZE); > + gnttab_batch_copy(gco, npo.copy_prod); > > while ((skb = __skb_dequeue(&rxq)) != NULL) { > sco = (struct skb_cb_overlay *)skb->cb; > > vif = netdev_priv(skb->dev); > > - if (netbk->meta[npo.meta_cons].gso_size && vif->gso_prefix) { > + if (m[npo.meta_cons].gso_size && vif->gso_prefix) { > resp = RING_GET_RESPONSE(&vif->rx, > vif->rx.rsp_prod_pvt++); > > resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data; > > - resp->offset = netbk->meta[npo.meta_cons].gso_size; > - resp->id = netbk->meta[npo.meta_cons].id; > + resp->offset = m[npo.meta_cons].gso_size; > + resp->id = m[npo.meta_cons].id; > resp->status = sco->meta_slots_used; > > npo.meta_cons++; > @@ -680,12 +707,12 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > flags |= XEN_NETRXF_data_validated; > > offset = 0; > - resp = make_rx_response(vif, netbk->meta[npo.meta_cons].id, > + resp = make_rx_response(vif, m[npo.meta_cons].id, > status, offset, > - netbk->meta[npo.meta_cons].size, > + m[npo.meta_cons].size, > flags); > > - if (netbk->meta[npo.meta_cons].gso_size && !vif->gso_prefix) { > + if (m[npo.meta_cons].gso_size && !vif->gso_prefix) { > struct xen_netif_extra_info *gso > (struct xen_netif_extra_info *) > RING_GET_RESPONSE(&vif->rx, > @@ -693,7 +720,7 @@ static void xen_netbk_rx_action(struct xen_netbk 
*netbk) > > resp->flags |= XEN_NETRXF_extra_info; > > - gso->u.gso.size = netbk->meta[npo.meta_cons].gso_size; > + gso->u.gso.size = m[npo.meta_cons].gso_size; > gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4; > gso->u.gso.pad = 0; > gso->u.gso.features = 0; > @@ -703,7 +730,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > } > > netbk_add_frag_responses(vif, status, > - netbk->meta + npo.meta_cons + 1, > + m + npo.meta_cons + 1, > sco->meta_slots_used); > > RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); > @@ -726,6 +753,9 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk) > if (!skb_queue_empty(&netbk->rx_queue) && > !timer_pending(&netbk->net_timer)) > xen_netbk_kick_thread(netbk); > + > + put_cpu_var(grant_copy_op); > + put_cpu_var(meta); > } > > void xen_netbk_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb) > @@ -1351,9 +1381,10 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) > return false; > } > > -static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) > +static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk, > + struct gnttab_copy *tco) > { > - struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop; > + struct gnttab_copy *gop = tco, *request_gop; > struct sk_buff *skb; > int ret; > > @@ -1531,16 +1562,17 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk) > vif->tx.req_cons = idx; > xen_netbk_check_rx_xenvif(vif); > > - if ((gop-netbk->tx_copy_ops) >= ARRAY_SIZE(netbk->tx_copy_ops)) > + if ((gop-tco) >= TX_COPY_OPS_SIZE) > break; > } > > - return gop - netbk->tx_copy_ops; > + return gop - tco; > } > > -static void xen_netbk_tx_submit(struct xen_netbk *netbk) > +static void xen_netbk_tx_submit(struct xen_netbk *netbk, > + struct gnttab_copy *tco) > { > - struct gnttab_copy *gop = netbk->tx_copy_ops; > + struct gnttab_copy *gop = tco; > struct sk_buff *skb; > > while ((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) { > @@ -1615,15 +1647,37 @@ static void 
xen_netbk_tx_submit(struct xen_netbk *netbk) > static void xen_netbk_tx_action(struct xen_netbk *netbk) > { > unsigned nr_gops; > + struct gnttab_copy *tco; > + static int unusable_count; > + > + tco = get_cpu_var(tx_copy_ops); > + > + if (tco == NULL) { > + put_cpu_var(tx_copy_ops); > + if (unusable_count == 1000) { > + printk(KERN_ALERTDitto. printk_ratelimited.> + "xen-netback: " > + "CPU %d scratch space is not available," > + " not doing any RX work for netback/%d\n", > + smp_processor_id(), > + (int)(netbk - xen_netbk));And can you explain what the recovery mechanism is?> + } else > + unusable_count++; > + return; > + } > > - nr_gops = xen_netbk_tx_build_gops(netbk); > + nr_gops = xen_netbk_tx_build_gops(netbk, tco); > > - if (nr_gops == 0) > + if (nr_gops == 0) { > + put_cpu_var(tx_copy_ops); > return; > + } > + > + gnttab_batch_copy(tco, nr_gops); > > - gnttab_batch_copy(netbk->tx_copy_ops, nr_gops); > + xen_netbk_tx_submit(netbk, tco); > > - xen_netbk_tx_submit(netbk); > + put_cpu_var(tx_copy_ops); > } > > static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx, > @@ -1760,6 +1814,93 @@ static int xen_netbk_kthread(void *data) > return 0; > } > > +static int __create_percpu_scratch_space(unsigned int cpu) > +{ > + if (per_cpu(tx_copy_ops, cpu) || > + per_cpu(grant_copy_op, cpu) || > + per_cpu(meta, cpu)) > + return 0; > + > + per_cpu(tx_copy_ops, cpu) > + vzalloc_node(sizeof(struct gnttab_copy) * TX_COPY_OPS_SIZE, > + cpu_to_node(cpu)); > + > + per_cpu(grant_copy_op, cpu) > + vzalloc_node(sizeof(struct gnttab_copy) * GRANT_COPY_OP_SIZE, > + cpu_to_node(cpu)); > + > + per_cpu(meta, cpu) > + vzalloc_node(sizeof(struct netbk_rx_meta) * META_SIZE, > + cpu_to_node(cpu)); > + > + if (!per_cpu(tx_copy_ops, cpu) || > + !per_cpu(grant_copy_op, cpu) || > + !per_cpu(meta, cpu)) > + return -ENOMEM;And no freeing? Ah you require the __free_percpu_scratch_space to do the job for you. 
Um, why not do it here instead of depending on the calleer to clean up the mess? Say: { __free_percpu_scratch_space(cpu); return -ENOMEM; } ?> + > + return 0; > +} > + > +static void __free_percpu_scratch_space(unsigned( int cpu) > +{ > + void *tmp; > + > + tmp = per_cpu(tx_copy_ops, cpu); > + per_cpu(tx_copy_ops, cpu) = NULL; > + vfree(tmp); > + > + tmp = per_cpu(grant_copy_op, cpu); > + per_cpu(grant_copy_op, cpu) = NULL; > + vfree(tmp); > + > + tmp = per_cpu(meta, cpu); > + per_cpu(meta, cpu) = NULL; > + vfree(tmp); > +} > + > +static int __netback_percpu_callback(struct notifier_block *nfb, > + unsigned long action, void *hcpu) > +{ > + unsigned int cpu = (unsigned long)hcpu; > + int rc = NOTIFY_DONE; > + > + switch (action) { > + case CPU_ONLINE: > + case CPU_ONLINE_FROZEN: > + printk(KERN_INFO "xen-netback: CPU %d online, creating scratch space\n", > + cpu);I think this is more of pr_debug type.> + rc = __create_percpu_scratch_space(cpu); > + if (rc) { > + printk(KERN_ALERT "xen-netback: failed to create scratch space for CPU %d\n", > + cpu); > + /* There is really nothing more we can do. Free any > + * partially allocated scratch space. When processing > + * routines get to run they will just print warning > + * message and stop processing. > + */ > + __free_percpu_scratch_space(cpu);Ugh. 
Could the code skip creating a kthread on a CPU for which the per_cpu(meta, cpu) == NULL?> + rc = NOTIFY_BAD; > + } else > + rc = NOTIFY_OK; > + break; > + case CPU_DEAD: > + case CPU_DEAD_FROZEN: > + printk(KERN_INFO "xen-netback: CPU %d offline, destroying scratch space\n", > + cpu); > + __free_percpu_scratch_space(cpu); > + rc = NOTIFY_OK; > + break; > + default: > + break; > + } > + > + return rc; > +} > + > +static struct notifier_block netback_notifier_block = { > + .notifier_call = __netback_percpu_callback, > +}; > + > void xen_netbk_unmap_frontend_rings(struct xenvif *vif) > { > if (vif->tx.sring) > @@ -1810,6 +1951,7 @@ static int __init netback_init(void) > int i; > int rc = 0; > int group; > + int cpu; > > if (!xen_domain()) > return -ENODEV; > @@ -1821,10 +1963,21 @@ static int __init netback_init(void) > fatal_skb_slots = XEN_NETBK_LEGACY_SLOTS_MAX; > } > > + for_each_online_cpu(cpu) { > + rc = __create_percpu_scratch_space(cpu); > + if (rc) { > + rc = -ENOMEM; > + goto failed_init; > + } > + } > + register_hotcpu_notifier(&netback_notifier_block); > + > xen_netbk_group_nr = num_online_cpus(); > xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr); > - if (!xen_netbk) > - return -ENOMEM; > + if (!xen_netbk) { > + goto failed_init; > + rc = -ENOMEM; > + } > > for (group = 0; group < xen_netbk_group_nr; group++) { > struct xen_netbk *netbk = &xen_netbk[group]; > @@ -1849,7 +2002,7 @@ static int __init netback_init(void) > printk(KERN_ALERT "kthread_create() fails at netback\n"); > del_timer(&netbk->net_timer); > rc = PTR_ERR(netbk->task); > - goto failed_init; > + goto failed_init_destroy_kthreads; > } > > kthread_bind(netbk->task, group); > @@ -1865,17 +2018,20 @@ static int __init netback_init(void) > > rc = xenvif_xenbus_init(); > if (rc) > - goto failed_init; > + goto failed_init_destroy_kthreads; > > return 0; > > -failed_init: > +failed_init_destroy_kthreads: > while (--group >= 0) { > struct xen_netbk *netbk = &xen_netbk[group]; > 
del_timer(&netbk->net_timer); > kthread_stop(netbk->task); > } > vfree(xen_netbk); > +failed_init: > + for_each_online_cpu(cpu) > + __free_percpu_scratch_space(cpu); > return rc; > > } > @@ -1899,6 +2055,11 @@ static void __exit netback_fini(void) > } > > vfree(xen_netbk); > + > + unregister_hotcpu_notifier(&netback_notifier_block); > + > + for_each_online_cpu(i) > + __free_percpu_scratch_space(i); > } > module_exit(netback_fini); > > -- > 1.7.10.4 >
David Vrabel
2013-May-28 13:36 UTC
Re: [PATCH 2/3] xen-netback: switch to per-cpu scratch space
On 28/05/13 14:18, Konrad Rzeszutek Wilk wrote: > On Mon, May 27, 2013 at 12:29:42PM +0100, Wei Liu wrote: >> There are maximum nr_onlie_cpus netback threads running. We can make use >> of per-cpu scratch space to reduce the size of buffer space when we move >> to 1:1 model. >> >> In the unlikely event when per-cpu scratch space is not available, >> processing routines will refuse to run on that CPU.[...]>> --- a/drivers/net/xen-netback/netback.c >> +++ b/drivers/net/xen-netback/netback.c[...]>> + printk(KERN_ALERT >> + "xen-netback: " >> + "CPU %d scratch space is not available," >> + " not doing any TX work for netback/%d\n", >> + smp_processor_id(), >> + (int)(netbk - xen_netbk)); > > So ... are you going to retry it? Drop it? Can you include in the message the > mechanism by which you are going to recover? >[...]>> + "xen-netback: " >> + "CPU %d scratch space is not available," >> + " not doing any RX work for netback/%d\n", >> + smp_processor_id(), >> + (int)(netbk - xen_netbk)); > > And can you explain what the recovery mechanism is? There isn't any recovery mechanism at the moment. If the scratch space was not allocated then any netback thread may end up being unable to do any work indefinitely (if the scheduler repeatedly schedules them on the VCPU with no scratch space). This is an appalling failure mode. I also don't think there is a sensible way to recover. We do not want hotplugging of a VCPU to break or degrade the behaviour of existing VIFs. The meta data is 12 * 512 = 6144 and the grant table ops is 24 * 512 = 12288. This works out to 6 pages total. I think we can spare 6 pages per VIF and just have per-thread scratch space. You may also want to consider a smaller batch size instead of allowing for 2x ring size. How often do you need this many entries? David
David Vrabel
2013-May-28 13:37 UTC
Re: [PATCH 3/3] xen-netback: switch to NAPI + kthread 1:1 model
On 27/05/13 12:29, Wei Liu wrote:> This patch implements 1:1 model netback. NAPI and kthread are utilized > to do the weight-lifting job: > > - NAPI is used for guest side TX (host side RX) > - kthread is used for guest side RX (host side TX)Should this be split into two patches? David
Wei Liu
2013-May-28 13:40 UTC
Re: [PATCH 3/3] xen-netback: switch to NAPI + kthread 1:1 model
On Tue, May 28, 2013 at 02:37:50PM +0100, David Vrabel wrote: > On 27/05/13 12:29, Wei Liu wrote: > > This patch implements 1:1 model netback. NAPI and kthread are utilized > > to do the weight-lifting job: > > > > - NAPI is used for guest side TX (host side RX) > > - kthread is used for guest side RX (host side TX) > > Should this be split into two patches? > In fact the original model uses kthread for both TX and RX. This patch just splits some functionality to NAPI. So I don't think this needs to be two patches. Wei. > David
Wei Liu
2013-May-28 13:54 UTC
Re: [PATCH 2/3] xen-netback: switch to per-cpu scratch space
On Tue, May 28, 2013 at 02:36:55PM +0100, David Vrabel wrote: > On 28/05/13 14:18, Konrad Rzeszutek Wilk wrote: > > On Mon, May 27, 2013 at 12:29:42PM +0100, Wei Liu wrote: > >> There are maximum nr_onlie_cpus netback threads running. We can make use > >> of per-cpu scratch space to reduce the size of buffer space when we move > >> to 1:1 model. > >> > >> In the unlikely event when per-cpu scratch space is not available, > >> processing routines will refuse to run on that CPU. > [...] > >> --- a/drivers/net/xen-netback/netback.c > >> +++ b/drivers/net/xen-netback/netback.c > [...] > >> + printk(KERN_ALERT > >> + "xen-netback: " > >> + "CPU %d scratch space is not available," > >> + " not doing any TX work for netback/%d\n", > >> + smp_processor_id(), > >> + (int)(netbk - xen_netbk)); > > > > So ... are you going to retry it? Drop it? Can you include in the message the > > mechanism by which you are going to recover? > > > [...] > >> + "xen-netback: " > >> + "CPU %d scratch space is not available," > >> + " not doing any RX work for netback/%d\n", > >> + smp_processor_id(), > >> + (int)(netbk - xen_netbk)); > > > > And can you explain what the recovery mechanism is? > > There isn't any recovery mechanism at the moment. If the scratch space > was not allocated then any netback thread may end up being unable to do > any work indefinitely (if the scheduler repeatedly schedules them on the > VCPU with no scratch space). > > This is an appalling failure mode. > This looks appalling at first glance but I doubt that people would pick this patch without picking the later one. With the later patch vifs can be scheduled on different CPUs, so they always get a chance to work. This patch is proposed before that one to reduce meaningless code movement. > I also don't think there is a sensible way to recover. We do not want > hotplugging of a VCPU to break or degrade the behaviour of existing VIFs.
> > The meta data is 12 * 512 = 6144 and the grant table ops is 24 * 512 = 12288. This works out to 6 pages total. I think we can spare 6 pages > per VIF and just have per-thread scratch space. > Sure, we can always worry about shrinking space usage later. :-) I don't really mind using extra space. I only want a new working baseline. > You may also want to consider a smaller batch size instead of allowing > for 2x ring size. How often do you need this many entries? Not often, but we ought to prepare for the worst case, right? Wei. > > David
annie li
2013-May-28 14:35 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 2013-5-27 19:29, Wei Liu wrote:> * This is a xen-devel only post, since we have not reached concesus on > what to add / remove in this new model. This series tries to be > conservative about adding in new feature compared to V1. > > This series implements NAPI + kthread 1:1 model for Xen netback. > > This model > - provides better scheduling fairness among vifs > - is prerequisite for implementing multiqueue for Xen network driver > > The first two patches are ground work for the third patch. First one > simplifies code in netback, second one can reduce memory footprint if we > switch to 1:1 model. > > The third patch has the real meat: > - make use of NAPI to mitigate interrupt > - kthreads are not bound to CPUs any more, so that we can take > advantage of backend scheduler and trust it to do the right thing > > Change since V1: > - No page pool in this version. Instead page tracking facility is > removed.What is your thought about page pool in V1? will you re-post it later on? Thanks Annie> > Wei Liu (3): > xen-netback: remove page tracking facility > xen-netback: switch to per-cpu scratch space > xen-netback: switch to NAPI + kthread 1:1 model > > drivers/net/xen-netback/common.h | 92 ++-- > drivers/net/xen-netback/interface.c | 122 +++-- > drivers/net/xen-netback/netback.c | 959 +++++++++++++++-------------------- > 3 files changed, 537 insertions(+), 636 deletions(-) >
Wei Liu
2013-May-28 15:09 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Tue, May 28, 2013 at 10:35:43PM +0800, annie li wrote: > > On 2013-5-27 19:29, Wei Liu wrote: > >* This is a xen-devel only post, since we have not reached concesus on > > what to add / remove in this new model. This series tries to be > > conservative about adding in new feature compared to V1. > > > >This series implements NAPI + kthread 1:1 model for Xen netback. > > > >This model > > - provides better scheduling fairness among vifs > > - is prerequisite for implementing multiqueue for Xen network driver > > > >The first two patches are ground work for the third patch. First one > >simplifies code in netback, second one can reduce memory footprint if we > >switch to 1:1 model. > > > >The third patch has the real meat: > > - make use of NAPI to mitigate interrupt > > - kthreads are not bound to CPUs any more, so that we can take > > advantage of backend scheduler and trust it to do the right thing > > > >Change since V1: > > - No page pool in this version. Instead page tracking facility is > > removed. > > What is your thought about page pool in V1? will you re-post it later on? > That would be necessary if we introduce mapping in the future. It's sort of redundant at the moment with the copying scheme. Wei.
Matt Wilson
2013-May-29 01:43 UTC
Re: [PATCH 1/3] xen-netback: remove page tracking facility
On Tue, May 28, 2013 at 10:21:32AM +0100, David Vrabel wrote: > On 27/05/13 12:29, Wei Liu wrote:[...]> > Simple iperf test shows no performance regression (obviously we do two > > copy's anyway): > > > > W/ tracking: ~5.3Gb/s > > W/o tracking: ~5.4Gb/s > > > > Signed-off-by: Wei Liu <wei.liu2@citrix.com> > > --- > > drivers/net/xen-netback/netback.c | 77 +------------------------------------ > > 1 file changed, 2 insertions(+), 75 deletions(-) > > Nice! This is the sort of patch I like. Me too. :-) Was there any change in CPU utilization? --msw
On Tue, May 28, 2013 at 06:43:47PM -0700, Matt Wilson wrote: > On Tue, May 28, 2013 at 10:21:32AM +0100, David Vrabel wrote: > > On 27/05/13 12:29, Wei Liu wrote: > > [...] > > > > Simple iperf test shows no performance regression (obviously we do two > > > copy's anyway): > > > > > > W/ tracking: ~5.3Gb/s > > > W/o tracking: ~5.4Gb/s > > > > > > Signed-off-by: Wei Liu <wei.liu2@citrix.com> > > > --- > > > drivers/net/xen-netback/netback.c | 77 +------------------------------------ > > > 1 file changed, 2 insertions(+), 75 deletions(-) > > > > Nice! This is the sort of patch I like. > > Me too. :-) Was there any change in CPU utilization? > Didn't measure. But there should not be any change with this patch AFAICT. Wei. > --msw
David Vrabel
2013-Jun-11 10:06 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 27/05/13 12:29, Wei Liu wrote:> * This is a xen-devel only post, since we have not reached concesus on > what to add / remove in this new model. This series tries to be > conservative about adding in new feature compared to V1. > > This series implements NAPI + kthread 1:1 model for Xen netback. > > This model > - provides better scheduling fairness among vifs > - is prerequisite for implementing multiqueue for Xen network driver > > The first two patches are ground work for the third patch. First one > simplifies code in netback, second one can reduce memory footprint if we > switch to 1:1 model. > > The third patch has the real meat: > - make use of NAPI to mitigate interrupt > - kthreads are not bound to CPUs any more, so that we can take > advantage of backend scheduler and trust it to do the right thing > > Change since V1: > - No page pool in this version. Instead page tracking facility is > removed.Andrew Bennieston has done some performance measurements with (I think) the V1 series and it shows a significant decrease in performance of from-guest traffic even with only two VIFs. Andrew will be able to comment more on this. Andrew, can you also make available your results for others to review? David
Wei Liu
2013-Jun-11 10:15 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:> On 27/05/13 12:29, Wei Liu wrote: > > * This is a xen-devel only post, since we have not reached concesus on > > what to add / remove in this new model. This series tries to be > > conservative about adding in new feature compared to V1. > > > > This series implements NAPI + kthread 1:1 model for Xen netback. > > > > This model > > - provides better scheduling fairness among vifs > > - is prerequisite for implementing multiqueue for Xen network driver > > > > The first two patches are ground work for the third patch. First one > > simplifies code in netback, second one can reduce memory footprint if we > > switch to 1:1 model. > > > > The third patch has the real meat: > > - make use of NAPI to mitigate interrupt > > - kthreads are not bound to CPUs any more, so that we can take > > advantage of backend scheduler and trust it to do the right thing > > > > Change since V1: > > - No page pool in this version. Instead page tracking facility is > > removed. > > Andrew Bennieston has done some performance measurements with (I think) > the V1 series and it shows a significant decrease in performance of > from-guest traffic even with only two VIFs. > > Andrew will be able to comment more on this. > > Andrew, can you also make available your results for others to review? >In my third series there is also simple performance figures attached. Andrew could you please have a look at that as well? If you have time, could you try my third series? In the third series, the only possible performance impact is the new model, which should narrow the problem down. Wei.> David
Andrew Bennieston
2013-Jun-12 13:44 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 11/06/13 11:15, Wei Liu wrote: > On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote: >> On 27/05/13 12:29, Wei Liu wrote: >>> * This is a xen-devel only post, since we have not reached concesus on >>> what to add / remove in this new model. This series tries to be >>> conservative about adding in new feature compared to V1. >>> >>> This series implements NAPI + kthread 1:1 model for Xen netback. >>> >>> This model >>> - provides better scheduling fairness among vifs >>> - is prerequisite for implementing multiqueue for Xen network driver >>> >>> The first two patches are ground work for the third patch. First one >>> simplifies code in netback, second one can reduce memory footprint if we >>> switch to 1:1 model. >>> >>> The third patch has the real meat: >>> - make use of NAPI to mitigate interrupt >>> - kthreads are not bound to CPUs any more, so that we can take >>> advantage of backend scheduler and trust it to do the right thing >>> >>> Change since V1: >>> - No page pool in this version. Instead page tracking facility is >>> removed. >> >> Andrew Bennieston has done some performance measurements with (I think) >> the V1 series and it shows a significant decrease in performance of >> from-guest traffic even with only two VIFs. >> >> Andrew will be able to comment more on this. >> >> Andrew, can you also make available your results for others to review? Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing detailing the tests I performed and the results I saw, along with some summary text from my analysis. Note that I also performed these tests without manually distributing IRQs across cores, and the performance was, as expected, rather poor. I didn't include those plots on the Wiki page since they don't really provide any new information.
> Andrew could you please have a look at that as well? I had a look at those; I think they agree with my tests where there is overlap. The tests I performed were repeated a number of times and covered a broader range of scenarios and have associated error bars which provide a measure of variability between tests (as well as indicating the statistical significance of differences between tests). The error bars can also be interpreted in terms of fairness; smaller error bars mean that all TCP streams across all VIFs attain similar throughput to each other. Larger error bars mean that there is quite a lot of variation from one stream to another, e.g. as a stream or VIF may be starved of resources. > If you have time, could you try my third series? In the third series, > the only possible performance impact is the new model, which should > narrow the problem down. > Wei. I am going to test the V3 patches as soon as I get the time; hopefully later this week, or early next week. I'll post the results once I have them. Andrew.
Wei Liu
2013-Jun-13 09:01 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote:> >>Andrew will be able to comment more on this. > >> > >>Andrew, can you also make available your results for others to review? > > Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing > detailing the tests I performed and the results I saw, along with > some summary text from my analysis. >Thanks Andrew! Nice plot and nice analysis. Just one nit, the CPU curves are not very distinguishable. Would you mind not using dotted lines for your next graph? ;-) Wei.
Andrew Bennieston
2013-Jun-13 11:18 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 13/06/13 10:01, Wei Liu wrote: > On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote: >>>> Andrew will be able to comment more on this. >>>> >>>> Andrew, can you also make available your results for others to review? >> >> Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing >> detailing the tests I performed and the results I saw, along with >> some summary text from my analysis. >> > > Thanks Andrew! Nice plot and nice analysis. > > Just one nit, the CPU curves are not very distinguishable. Would you > mind not using dotted lines for your next graph? ;-) > > > Wei. > The CPU curves are pretty much identical. Using solid lines obscures the interesting data somewhat... I'll see what I can do, though :) Andrew
Wei Liu
2013-Jun-13 13:06 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Thu, Jun 13, 2013 at 12:18:01PM +0100, Andrew Bennieston wrote: > On 13/06/13 10:01, Wei Liu wrote: > >On Wed, Jun 12, 2013 at 02:44:17PM +0100, Andrew Bennieston wrote: > >>>>Andrew will be able to comment more on this. > >>>> > >>>>Andrew, can you also make available your results for others to review? > >> > >>Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing > >>detailing the tests I performed and the results I saw, along with > >>some summary text from my analysis. > >> > > > >Thanks Andrew! Nice plot and nice analysis. > > > >Just one nit, the CPU curves are not very distinguishable. Would you > >mind not using dotted lines for your next graph? ;-) > > > > > >Wei. > > > The CPU curves are pretty much identical. Using solid lines obscures > the interesting data somewhat... I'll see what I can do, though :) > Oh I see. That's why I can only see one CPU curve. :-) Wei. > Andrew
Andrew Bennieston
2013-Jul-03 12:45 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On 11/06/13 11:15, Wei Liu wrote: > On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote: >> On 27/05/13 12:29, Wei Liu wrote: >>> * This is a xen-devel only post, since we have not reached concesus on >>> what to add / remove in this new model. This series tries to be >>> conservative about adding in new feature compared to V1. >>> >>> This series implements NAPI + kthread 1:1 model for Xen netback. >>> >>> This model >>> - provides better scheduling fairness among vifs >>> - is prerequisite for implementing multiqueue for Xen network driver >>> >>> The first two patches are ground work for the third patch. First one >>> simplifies code in netback, second one can reduce memory footprint if we >>> switch to 1:1 model. >>> >>> The third patch has the real meat: >>> - make use of NAPI to mitigate interrupt >>> - kthreads are not bound to CPUs any more, so that we can take >>> advantage of backend scheduler and trust it to do the right thing >>> >>> Change since V1: >>> - No page pool in this version. Instead page tracking facility is >>> removed. >> >> Andrew Bennieston has done some performance measurements with (I think) >> the V1 series and it shows a significant decrease in performance of >> from-guest traffic even with only two VIFs. >> >> Andrew will be able to comment more on this. >> >> Andrew, can you also make available your results for others to review? >> > > In my third series there is also simple performance figures attached. > Andrew could you please have a look at that as well? > > If you have time, could you try my third series? In the third series, > the only possible performance impact is the new model, which should > narrow the problem down. Wei, I finally have the results from testing your V3 patches.
They are available at: http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V3_performance_testing This time, the base for the tests was linux-next, rather than v3.6.11 (mostly to reduce the effort in backporting patches) so the results can't be directly compared to the V1, but I still ran tests without, then with, your patches, so you should be able to see the direct effect of those patches. The summary is that there is (as expected) no impact on the dom0 -> VM measurements, and the VM -> dom0 measurements are identical with and without the patches up to 4 concurrently transmitting VMs or so, after which the original version outperforms the patched version. The difference becomes less pronounced as the number of TCP streams is increased, though. My conclusion from these results would be that your V3 patches have fairly minimal performance impact, although they should improve _fairness_ (due to the kthread per VIF) on the transmit (into VM) pathway, and simplify the handling of the receive (out of VM) scenario too. In other news, it looks like the throughput in general has improved between 3.6 and -next :) Cheers, Andrew
Wei Liu
2013-Jul-03 16:07 UTC
Re: [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model
On Wed, Jul 03, 2013 at 01:45:13PM +0100, Andrew Bennieston wrote: [...]> >If you have time, could you try my third series? In the third series, > >the only possible performance impact is the new model, which should > >narrow the problem down. > > Wei, I finally have the results from testing your V3 patches. They > are available at: > > http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V3_performance_testing > Thanks, Andrew. > This time, the base for the tests was linux-next, rather than > v3.6.11 (mostly to reduce the effort in backporting patches) so the > results can't be directly compared to the V1, but I still ran tests > without, then with, your patches, so you should be able to see the > direct effect of those patches. > > The summary is that there is (as expected) no impact on the dom0 -> > VM measurements, and the VM -> dom0 measurements are identical with > and without the patches up to 4 concurrently transmitting VMs or so, > after which the original version outperforms the patched version. > The difference becomes less pronounced as the number of TCP streams > is increased, though. > > My conclusion from these results would be that your V3 patches have > fairly minimal performance impact, although they should improve > _fairness_ (due to the kthread per VIF) on the transmit (into VM) > pathway, and simplify the handling of the receive (out of VM) > scenario too. > I'm happy to know at least my patches don't have significant negative impact. :-) > In other news, it looks like the throughput in general has improved > between 3.6 and -next :) > Agreed. Wei. > Cheers, > Andrew