Search results matching "_rx"
2014 Jan 16
2
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...el Dalton wrote:
[...]
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
[...]
> @@ -2401,6 +2416,23 @@ static inline int netif_copy_real_num_queues(struct net_device *to_dev,
> #endif
> }
>
> +#ifdef CONFIG_SYSFS
> +static inline unsigned int get_netdev_rx_queue_index(
> + struct netdev_rx_queue *queue)
> +{
> + struct net_device *dev = queue->dev;
> + int i;
> +
> + for (i = 0; i < dev->num_rx_queues; i++)
> + if (queue == &dev->_rx[i])
> + break;
Why write a loop when you can do:
i = queue - dev->_rx...
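A minimal, self-contained sketch of the pointer-subtraction idea suggested above (mock structs only; the real kernel definitions differ): since every rx queue lives in the dev->_rx array, its index is simply the pointer difference from the array base.

#include <stdio.h>
#include <stdlib.h>

/* Mock stand-ins for the kernel structs; the field names follow the
   quoted patch, everything else here is illustrative only. */
struct net_device;

struct netdev_rx_queue {
    struct net_device *dev;
};

struct net_device {
    unsigned int num_rx_queues;
    struct netdev_rx_queue *_rx;    /* array of rx queues */
};

/* Index via pointer subtraction: valid because each queue is an element
   of the dev->_rx array, so (queue - dev->_rx) is its index. */
static unsigned int get_netdev_rx_queue_index(struct netdev_rx_queue *queue)
{
    struct net_device *dev = queue->dev;

    return (unsigned int)(queue - dev->_rx);
}

int main(void)
{
    struct net_device dev;
    unsigned int i;

    dev.num_rx_queues = 4;
    dev._rx = calloc(dev.num_rx_queues, sizeof(*dev._rx));
    if (!dev._rx)
        return 1;
    for (i = 0; i < dev.num_rx_queues; i++)
        dev._rx[i].dev = &dev;

    printf("queue 2 has index %u\n", get_netdev_rx_queue_index(&dev._rx[2]));
    free(dev._rx);
    return 0;
}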
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...device.h
@@ -668,15 +668,28 @@ extern struct rps_sock_flow_table __rcu *rps_sock_flow_table;
bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id,
u16 filter_id);
#endif
+#endif /* CONFIG_RPS */
/* This structure contains an instance of an RX queue. */
struct netdev_rx_queue {
+#ifdef CONFIG_RPS
struct rps_map __rcu *rps_map;
struct rps_dev_flow_table __rcu *rps_flow_table;
+#endif
struct kobject kobj;
struct net_device *dev;
} ____cacheline_aligned_in_smp;
-#endif /* CONFIG_RPS */
+
+/*
+ * RX queue sysfs structures and functions.
+ */
+struct rx_qu...
2014 Jan 16
0
[PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...ibutes to
permit a device-specific attribute group. Initial use case for this
support will be to allow the virtio-net device to export per-receive
queue mergeable receive buffer size.
Signed-off-by: Michael Dalton <mwdalton at google.com>
---
v3->v4: Simplify by removing loop in get_netdev_rx_queue_index.
include/linux/netdevice.h | 35 +++++++++++++++++++++++++++++++----
net/core/dev.c | 12 ++++++------
net/core/net-sysfs.c | 33 ++++++++++++++++-----------------
3 files changed, 53 insertions(+), 27 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linu...
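A rough userspace illustration of the device-specific attribute-group idea (the names here are invented, and this is not the kernel sysfs API): the driver supplies a table of named show callbacks, and generic code walks that table to expose one value per attribute for each rx queue.

#include <stdio.h>

/* Illustrative only: a per-queue "attribute" is a named callback that
   formats one value, loosely mirroring how a sysfs attribute group
   bundles driver-supplied per-queue files. */
struct rx_queue {
    unsigned int index;
    unsigned int mergeable_buf_len;   /* hypothetical per-queue value */
};

struct rx_queue_attr {
    const char *name;
    int (*show)(const struct rx_queue *q, char *buf, size_t len);
};

static int show_mergeable_buf_len(const struct rx_queue *q, char *buf, size_t len)
{
    return snprintf(buf, len, "%u\n", q->mergeable_buf_len);
}

/* The "group" a device would register: a NULL-terminated attribute list. */
static const struct rx_queue_attr virtio_rx_attrs[] = {
    { "mergeable_rx_buffer_size", show_mergeable_buf_len },
    { NULL, NULL },
};

int main(void)
{
    struct rx_queue q = { .index = 0, .mergeable_buf_len = 1536 };
    const struct rx_queue_attr *attr;
    char buf[64];

    for (attr = virtio_rx_attrs; attr->name; attr++) {
        attr->show(&q, buf, sizeof(buf));
        printf("rx-%u/%s = %s", q.index, attr->name, buf);
    }
    return 0;
}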
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
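A hedged sketch of the auto-tuning idea (the EWMA weighting and clamp bounds below are assumptions, not values taken from the series): track a moving average of recent packet lengths and clamp the next posted buffer size between an MTU-sized minimum and PAGE_SIZE.

#include <stdio.h>

/* Illustrative constants; the series' actual bounds and weighting may differ. */
#define GOOD_PACKET_LEN 1536U   /* roughly MTU-sized buffer */
#define PAGE_SIZE_BYTES 4096U
#define EWMA_WEIGHT     64U     /* larger = smoother average */

static unsigned int pkt_len_avg = GOOD_PACKET_LEN;

/* Fold the length of a received packet into the running average. */
static void record_packet_len(unsigned int len)
{
    pkt_len_avg = (pkt_len_avg * (EWMA_WEIGHT - 1) + len) / EWMA_WEIGHT;
}

/* Pick the next buffer size: follow the average, but never post less
   than an MTU-sized buffer or more than a page. */
static unsigned int next_rx_buf_len(void)
{
    unsigned int len = pkt_len_avg;

    if (len < GOOD_PACKET_LEN)
        len = GOOD_PACKET_LEN;
    if (len > PAGE_SIZE_BYTES)
        len = PAGE_SIZE_BYTES;
    return len;
}

int main(void)
{
    unsigned int i;

    /* A burst of large packets pulls the posted buffer size up. */
    for (i = 0; i < 256; i++)
        record_packet_len(4000);
    printf("after large packets: %u bytes\n", next_rx_buf_len());

    /* A burst of small packets pulls it back toward the minimum. */
    for (i = 0; i < 256; i++)
        record_packet_len(256);
    printf("after small packets: %u bytes\n", next_rx_buf_len());
    return 0;
}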
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
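A minimal userspace stand-in for the fallback strategy described above (this is not the kernel allocator): try the highest requested order first and step down toward an order-0 allocation whenever a larger one fails.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ 4096U

/* Stand-in for a page allocator that can fail for large requests;
   here anything above order-1 "fails" just to exercise the fallback. */
static void *try_alloc_pages(unsigned int order)
{
    if (order > 1)
        return NULL;
    return malloc(PAGE_SZ << order);
}

/* Try successively smaller orders, down to a single page (order 0),
   mirroring the fallback described in the commit message. */
static void *alloc_frag(unsigned int max_order, unsigned int *got_order)
{
    unsigned int order = max_order;

    for (;;) {
        void *p = try_alloc_pages(order);

        if (p) {
            *got_order = order;
            return p;
        }
        if (order == 0)
            return NULL;
        order--;
    }
}

int main(void)
{
    unsigned int order;
    void *frag = alloc_frag(3, &order);   /* ask for 8 pages, settle for less */

    if (frag)
        printf("allocated %u bytes (order %u)\n", PAGE_SZ << order, order);
    free(frag);
    return 0;
}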
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
On Jan 16, 2014 at 10:57 AM, Ben Hutchings <bhutchings at solarflare.com> wrote:
> Why write a loop when you can do:
> i = queue - dev->_rx;
Good point, the loop approach was done in get_netdev_queue_index --
I agree your fix is faster and simpler. I'll fix in next patchset.
Thanks!
Best,
Mike
2014 Jan 16
2
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
On Thu, 2014-01-16 at 11:07 -0800, Michael Dalton wrote:
> On Jan 16, 2014 at 10:57 AM, Ben Hutchings <bhutchings at solarflare.com> wrote:
> > Why write a loop when you can do:
> > i = queue - dev->_rx;
> Good point, the loop approach was done in get_netdev_queue_index --
> I agree your fix is faster and simpler. I'll fix in next patchset.
It's simpler but we don't know if it's faster (and I don't believe that
matters for the current usage).
If one of these functions s...
2020 Mar 26
0
[PATCH nbdkit 9/9] tests/old-plugins: Add plugin from nbdkit 1.18.2.
2012 Nov 26
13
[PATCH 0 of 4] Minios improvements for app development
This patch series contains a set of patches making minios rather easier
to use, from an application development point of view.
Overview of patches:
1 Command line argument parsing support, from Xen.
2 Weak console handler function.
3 Build system tweaks for application directories.
4 Trailing whitespace cleanup. (because it is very messy)
Patch 4 is likely to be more controversial than
2020 Mar 26
15
[PATCH nbdkit 0/9] Create libnbdkit.so
This creates libnbdkit.so as discussed in the following thread:
https://www.redhat.com/archives/libguestfs/2020-March/thread.html#00203
test-delay-shutdown.sh fails for unclear reasons.
This series starts by reverting "tests: Don't strand hung nbdkit
processes", because several other tests fail randomly unless I revert
that patch. I haven't investigated this yet, so it