Displaying 6 results from an estimated 6 matches for "xen_netbk_tx_build_gops".
2013 Feb 06
0
[PATCH 1/4] xen/netback: shutdown the ring if it contains garbage.
...'t an insane number of requests
on the ring (i.e. more than would fit in the ring). If the ring
contains garbage then previously it was possible to loop over this
insane number, getting an error each time and therefore not generating
any more pending requests and therefore not exiting the loop in
xen_netbk_tx_build_gops for an extended period.
Also turn the various netdev_dbg calls which now precipitate a fatal error
into netdev_err; they are effectively rate limited because the device is
shut down afterwards.
This fixes at least one known DoS/softlockup of the backend domain.
Signed-off-by: Ian Campbell <ian.campbell@citrix...
2013 Jun 24
3
[PATCH v2] xen-netback: add a pseudo pps rate limit
...allback. */
+	if (vif->remaining_packets < 1) {
+		vif->credit_timeout.data =
+			(unsigned long)vif;
+		vif->credit_timeout.function =
+			tx_credit_callback;
+		mod_timer(&vif->credit_timeout,
+			  next_credit);
+
+		return true;
+	}
+
+	return false;
+}
+
static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
{
struct gnttab_copy *gop = netbk->tx_copy_ops, *request_gop;
@@ -1470,6 +1508,13 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
rmb(); /* Ensure that we see the request before we copy it. */
memcpy(&txreq, RING_GET_REQUEST(&vif->...
2013 Apr 30
6
[PATCH net-next 2/2] xen-netback: avoid allocating variable size array on stack
...s & XEN_NETTXF_more_data);
+
+		keep_looping = (!drop_err && (txp++)->flags & XEN_NETTXF_more_data) ||
+			       (dropped_tx.flags & XEN_NETTXF_more_data);
+	} while (keep_looping);
	if (drop_err) {
		netbk_tx_err(vif, first, cons + slots);
@@ -1408,7 +1424,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
	       !list_empty(&netbk->net_schedule_list)) {
		struct xenvif *vif;
		struct xen_netif_tx_request txreq;
-		struct xen_netif_tx_request txfrags[max_skb_slots];
+		struct xen_netif_tx_request txfrags[XEN_NETIF_NR_SLOTS_MIN];
		struct page *page;
		struct xen_netif_...
2013 Feb 01
45
netback Oops then xenwatch stuck in D state
...>] notify_remote_via_irq+0xd/0x40
[<ffffffff81543b9b>] xen_netbk_rx_action+0x73b/0x800
[<ffffffff81544c25>] xen_netbk_kthread+0xb5/0xa60
[<ffffffff81080050>] ? finish_task_switch+0x60/0xd0
[<ffffffff81071fe0>] ? wake_up_bit+0x40/0x40
[<ffffffff81544b70>] ? xen_netbk_tx_build_gops+0xa10/0xa10
[<ffffffff81071926>] kthread+0xc6/0xd0
[<ffffffff810037b9>] ? xen_end_context_switch+0x19/0x20
[<ffffffff81071860>] ? kthread_freezable_should_stop+0x70/0x70
[<ffffffff81767c7c>] ret_from_fork+0x7c/0xb0
[<ffffffff81071860>] ? kthread_freezable_sho...
2013 Apr 17
1
Bug#701744: We see the same with Debian wheezy.
...41115.678191]
[<ffffffffa048cddd>] ? xen_netbk_schedule_xenvif+0x35/0xd6 [xen_netback]
Apr 16 16:02:25 hypervisor3 kernel: [2441115.678264]
[<ffffffffa048cf71>] ? netbk_tx_err+0x3f/0x4b [xen_netback]
Apr 16 16:02:25 hypervisor3 kernel: [2441115.678311]
[<ffffffffa048d517>] ? xen_netbk_tx_build_gops+0x59a/0x9e2 [xen_netback]
Apr 16 16:02:25 hypervisor3 kernel: [2441115.678384]
[<ffffffff81004be5>] ? phys_to_machine+0x13/0x1c
Apr 16 16:02:25 hypervisor3 kernel: [2441115.678429]
[<ffffffff810040b9>] ? xen_mc_flush+0x124/0x153
Apr 16 16:02:25 hypervisor3 kernel: [2441115.678474]...
2013 Jun 12
26
Interesting observation with network event notification and batching
Hi all
I'm hacking on netback, trying to identify whether TLB flushes cause a
heavy performance penalty on the Tx path. The hack is quite nasty (you
would not want to know, trust me).
Basically what it does is: 1) alter the network protocol to pass along
mfns instead of grant references, 2) when the backend sees a new mfn,
map it RO and cache it in its own address space.
With this