search for: sysfs_groups

Displaying 18 results from an estimated 18 matches for "sysfs_groups".

2014 Jan 11
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...eliminate much of the above, including most of (1) - (4). I believe our choices for what to do for the next patchset include: (a) Use debugfs as is currently done, removing any optional features listed above that are deemed unnecessary. (b) Add a per-netdev sysfs attribute group to net_device->sysfs_groups. Each attribute would display the mergeable packet buffer size for a given RX queue, and there would be max_queue_pairs attributes in total. This is already supported by net/core/net-sysfs.c:netdev_register_kobject(), but means that we would have a static set of per-RX queue files for all RX queues...
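For illustration, a minimal sketch of option (b) above, assuming hypothetical attribute names and a fixed queue count (a real driver would size the set to max_queue_pairs); this is not taken from any posted patch:

#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/sysfs.h>

static ssize_t mergeable_rx_buffer_size_show(struct device *dev,
					     struct device_attribute *attr,
					     char *buf)
{
	/* A real implementation would derive the RX queue index from 'attr'
	 * and report that queue's mergeable buffer size; a fixed placeholder
	 * value is returned here. */
	return sprintf(buf, "%u\n", 1500u);
}

/* One attribute per RX queue, max_queue_pairs of them in total. */
static DEVICE_ATTR(rx0_mergeable_buffer_size, S_IRUGO,
		   mergeable_rx_buffer_size_show, NULL);
static DEVICE_ATTR(rx1_mergeable_buffer_size, S_IRUGO,
		   mergeable_rx_buffer_size_show, NULL);

static struct attribute *mergeable_attrs[] = {
	&dev_attr_rx0_mergeable_buffer_size.attr,
	&dev_attr_rx1_mergeable_buffer_size.attr,
	/* ... one entry per RX queue ... */
	NULL,
};

static const struct attribute_group mergeable_group = {
	.attrs = mergeable_attrs,
};

/* In the driver's probe path, before register_netdev(): */
static void mergeable_attach_group(struct net_device *netdev)
{
	netdev->sysfs_groups[0] = &mergeable_group;
}

Because the attribute set is fixed at compile time, every RX queue the device could ever expose needs its own entry, which is the "static set of per-RX queue files" drawback noted above.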
2014 Jan 11
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...eliminate much of the above, including most of (1) - (4). I believe our choices for what to do for the next patchset include: (a) Use debugfs as is currently done, removing any optional features listed above that are deemed unnecessary. (b) Add a per-netdev sysfs attribute group to net_device->sysfs_groups. Each attribute would display the mergeable packet buffer size for a given RX queue, and there would be max_queue_pairs attributes in total. This is already supported by net/core/net-sysfs.c:netdev_register_kobject(), but means that we would have a static set of per-RX queue files for all RX queues...
2014 Jan 12
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...ove, including most of (1) - (4). I believe our choices for what to > do for the next patchset include: > (a) Use debugfs as is currently done, removing any optional features > listed above that are deemed unnecessary. > > (b) Add a per-netdev sysfs attribute group to net_device->sysfs_groups. > Each attribute would display the mergeable packet buffer size for a given > RX queue, and there would be max_queue_pairs attributes in total. This > is already supported by net/core/net-sysfs.c:netdev_register_kobject(), > but means that we would have a static set of per-RX queue fil...
2014 Jan 13
2
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
Sorry, I missed this important piece of information; it appears that netdev_queue (the TX equivalent of netdev_rx_queue) has already decoupled itself from CONFIG_XPS due to an attribute, queue_trans_timeout, that does not depend on XPS functionality. So it seems that something somewhat equivalent has already happened on the TX side. Best, Mike
2014 Jan 13
2
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
Sorry, I missed this important piece of information; it appears that netdev_queue (the TX equivalent of netdev_rx_queue) has already decoupled itself from CONFIG_XPS due to an attribute, queue_trans_timeout, that does not depend on XPS functionality. So it seems that something somewhat equivalent has already happened on the TX side. Best, Mike
2014 Jan 08
2
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
On 01/07/2014 01:25 PM, Michael Dalton wrote: > Add initial support for debugfs to virtio-net. Each virtio-net network > device will have a directory under /virtio-net in debugfs. The > per-network device directory will contain one sub-directory per active, > enabled receive queue. If mergeable receive buffers are enabled, each > receive queue directory will contain a read-only file
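As a rough sketch of the layout that description implies (a directory per device under /virtio-net in debugfs, a sub-directory per enabled receive queue, and a read-only file in each), assuming hypothetical helper names rather than the actual patch code:

#include <linux/debugfs.h>
#include <linux/netdevice.h>

static struct dentry *vnet_debugfs_root;	/* /sys/kernel/debug/virtio-net */

static int mergeable_size_get(void *data, u64 *val)
{
	/* 'data' would point at per-queue state; a fixed stand-in is used. */
	*val = 1500;
	return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(mergeable_size_fops, mergeable_size_get, NULL, "%llu\n");

static void vnet_debugfs_add(struct net_device *dev, int num_rx_queues)
{
	struct dentry *dev_dir, *q_dir;
	char name[16];
	int i;

	if (!vnet_debugfs_root)
		vnet_debugfs_root = debugfs_create_dir("virtio-net", NULL);

	dev_dir = debugfs_create_dir(netdev_name(dev), vnet_debugfs_root);
	for (i = 0; i < num_rx_queues; i++) {
		snprintf(name, sizeof(name), "rx-%d", i);
		q_dir = debugfs_create_dir(name, dev_dir);
		debugfs_create_file("mergeable_rx_buffer_size", 0444, q_dir,
				    NULL /* per-queue state */,
				    &mergeable_size_fops);
	}
}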
2014 Jan 08
2
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
On 01/07/2014 01:25 PM, Michael Dalton wrote: > Add initial support for debugfs to virtio-net. Each virtio-net network > device will have a directory under /virtio-net in debugfs. The > per-network device directory will contain one sub-directory per active, > enabled receive queue. If mergeable receive buffers are enabled, each > receive queue directory will contain a read-only file
2014 Jan 14
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...e mixed without any indication. For example, all virtio-net attributes for netdev eth0 queue N would be of the form: /sys/class/net/eth0/queues/rx-N/<attribute name> FWIW, the bonding netdev has a similar sysfs issue and uses a per-netdev attribute group (stored in the 'sysfs_groups' field of struct net_device). In the case of bonding, the attribute group is named, so device-independent netdev attributes are found in /sys/class/net/eth0/<attribute name> while bonding attributes are placed in /sys/class/net/eth0/bonding/<attribute name>. So it seems like there is...
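A minimal sketch of that named-group approach, with placeholder names; the point is the .name field, which makes the group's files appear in a sub-directory of the netdev's sysfs directory, just as bonding's do under /sys/class/net/eth0/bonding/:

#include <linux/device.h>
#include <linux/netdevice.h>

static ssize_t example_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	return sprintf(buf, "example\n");	/* placeholder attribute */
}
static DEVICE_ATTR_RO(example);

static struct attribute *example_attrs[] = {
	&dev_attr_example.attr,
	NULL,
};

static const struct attribute_group example_group = {
	.name  = "virtio-net",	/* files land in .../eth0/virtio-net/ */
	.attrs = example_attrs,
};

/* As bonding does, before register_netdev():
 *	dev->sysfs_groups[0] = &example_group;
 */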
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...f CONFIG_RPS +#ifdef CONFIG_SYSFS struct netdev_rx_queue *_rx; /* Number of RX queues allocated at register_netdev() time */ @@ -1424,6 +1437,8 @@ struct net_device { struct device dev; /* space for optional device, statistics, and wireless sysfs groups */ const struct attribute_group *sysfs_groups[4]; + /* space for optional per-rx queue attributes */ + const struct attribute_group *sysfs_rx_queue_group; /* rtnetlink link ops */ const struct rtnl_link_ops *rtnl_link_ops; @@ -2374,7 +2389,7 @@ static inline bool netif_is_multiqueue(const struct net_device *dev) int netif_set_real_num...
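For context, a hedged sketch of how a driver might use the new sysfs_rx_queue_group hook: define an rx_queue_attribute, wrap it in an attribute group, and point the field at it before register_netdev() so net-sysfs attaches the group to every /sys/class/net/<dev>/queues/rx-N/ directory. The attribute name and the show() signature follow the 2014-era net-sysfs definitions and are illustrative, not copied from the patch.

#include <linux/netdevice.h>
#include <linux/sysfs.h>

static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
					     struct rx_queue_attribute *attr,
					     char *buf)
{
	/* Report this queue's current mergeable buffer size estimate;
	 * a placeholder value stands in for the driver lookup. */
	return sprintf(buf, "%u\n", 1500u);
}

static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
	__ATTR_RO(mergeable_rx_buffer_size);

static struct attribute *mrg_rx_attrs[] = {
	&mergeable_rx_buffer_size_attribute.attr,
	NULL,
};

static const struct attribute_group mrg_rx_group = {
	.attrs = mrg_rx_attrs,
};

/* In the driver's probe path, before register_netdev(): */
static void use_rx_queue_group(struct net_device *dev)
{
	dev->sysfs_rx_queue_group = &mrg_rx_group;
}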
2014 Jan 16
0
[PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...f CONFIG_RPS +#ifdef CONFIG_SYSFS struct netdev_rx_queue *_rx; /* Number of RX queues allocated at register_netdev() time */ @@ -1424,6 +1437,8 @@ struct net_device { struct device dev; /* space for optional device, statistics, and wireless sysfs groups */ const struct attribute_group *sysfs_groups[4]; + /* space for optional per-rx queue attributes */ + const struct attribute_group *sysfs_rx_queue_group; /* rtnetlink link ops */ const struct rtnl_link_ops *rtnl_link_ops; @@ -2374,7 +2389,7 @@ static inline bool netif_is_multiqueue(const struct net_device *dev) int netif_set_real_num...
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
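To make the fall-back concrete, here is an illustrative sketch of the pattern (not the patch itself; the order constant and the surrounding page_frag bookkeeping are simplified): try the largest order first with flags that keep the attempt cheap and quiet, then step down toward order-0.

#include <linux/gfp.h>
#include <linux/mm.h>

#define FRAG_PAGE_ORDER 3	/* stand-in for SKB_FRAG_PAGE_ORDER */

static struct page *frag_alloc_fallback(gfp_t gfp, unsigned int *order_out)
{
	struct page *page;
	int order;

	for (order = FRAG_PAGE_ORDER; order >= 0; order--) {
		gfp_t gfp_mask = gfp;

		if (order)
			/* High-order attempts are opportunistic: compound
			 * page, no OOM warning, no heavy retries. */
			gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

		page = alloc_pages(gfp_mask, order);
		if (page) {
			*order_out = order;
			return page;
		}
	}
	return NULL;
}

Only the final order-0 attempt behaves as before, so a caller that cannot wait still gets, at worst, the old order-0 allocation.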
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation