search for: rx_queue_sysfs_op

Displaying 15 results from an estimated 15 matches for "rx_queue_sysfs_op".

2014 Jan 12
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...d_kobject) to call sysfs_create_group on the per-netdev attribute group (if non-NULL), adding the attributes in the group to the RX queue kobject. That should allow us to have per-RX queue attributes that are device-specific. I'm not a sysfs expert, but it seems that rx_queue_ktype and rx_queue_sysfs_ops presume that all rx queue sysfs operations are performed on attributes of type rx_queue_attribute. That type will need to be moved from net-sysfs.c to a header file like netdevice.h so that the type can be used in virtio-net when we declare the mergeable_rx_buffer_size attribute. The last issue i...
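The design discussed above is easiest to see in code. Below is a minimal sketch of what a device-specific per-RX-queue attribute could look like once struct rx_queue_attribute is exported in a shared header as proposed; the handler body and the get_mergeable_buf_len() helper are hypothetical stand-ins, not taken from the patch.

/*
 * Sketch only: assumes struct rx_queue_attribute has been moved out of
 * net-sysfs.c into a shared header (e.g. netdevice.h), as this thread
 * proposes. get_mergeable_buf_len() is a hypothetical helper.
 */
static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
					     struct rx_queue_attribute *attr,
					     char *buf)
{
	return sprintf(buf, "%u\n", get_mergeable_buf_len(queue));
}

static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
	__ATTR_RO(mergeable_rx_buffer_size);

/* Attributes for the per-netdev group handed to sysfs_create_group(). */
static struct attribute *virtio_net_mrg_rx_attrs[] = {
	&mergeable_rx_buffer_size_attribute.attr,
	NULL
};

static const struct attribute_group virtio_net_mrg_rx_group = {
	.name  = "virtio-net",
	.attrs = virtio_net_mrg_rx_attrs,
};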
2014 Jan 13
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...p on the
> per-netdev attribute group (if non-NULL), adding the attributes in
> the group to the RX queue kobject.

Exactly.

> That should allow us to have per-RX queue attributes that are
> device-specific. I'm not a sysfs expert, but it seems that rx_queue_ktype
> and rx_queue_sysfs_ops presume that all rx queue sysfs operations are
> performed on attributes of type rx_queue_attribute. That type will need
> to be moved from net-sysfs.c to a header file like netdevice.h so that
> the type can be used in virtio-net when we declare the
> mergeable_rx_buffer_size attribut...
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...ar *buf);
-	ssize_t (*store)(struct netdev_rx_queue *queue,
-			 struct rx_queue_attribute *attr, const char *buf, size_t len);
-};
+#ifdef CONFIG_SYSFS
 #define to_rx_queue_attr(_attr) container_of(_attr, \
 	struct rx_queue_attribute, attr)
@@ -543,6 +533,7 @@ static const struct sysfs_ops rx_queue_sysfs_ops = {
 	.store = rx_queue_attr_store,
 };
+#ifdef CONFIG_RPS
 static ssize_t show_rps_map(struct netdev_rx_queue *queue,
 			    struct rx_queue_attribute *attribute, char *buf)
 {
@@ -718,16 +709,20 @@ static struct rx_queue_attribute rps_cpus_attribute =
 static struct rx_queue_attribute rps_dev...
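For context, the to_rx_queue_attr() macro in this diff is what lets the generic sysfs_ops callbacks recover the typed attribute. A sketch of that dispatch pattern, modeled on the net-sysfs.c of this era (abbreviated, not the exact patch text):

#define to_rx_queue_attr(_attr) \
	container_of(_attr, struct rx_queue_attribute, attr)

#define to_rx_queue(obj) \
	container_of(obj, struct netdev_rx_queue, kobj)

/* Generic .show hook in rx_queue_sysfs_ops: recover the typed
 * attribute and the queue with container_of(), then call the
 * attribute's own handler. */
static ssize_t rx_queue_attr_show(struct kobject *kobj,
				  struct attribute *attr, char *buf)
{
	struct rx_queue_attribute *attribute = to_rx_queue_attr(attr);
	struct netdev_rx_queue *queue = to_rx_queue(kobj);

	if (!attribute->show)
		return -EIO;

	return attribute->show(queue, attribute, buf);
}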
2014 Jan 16
0
[PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...ar *buf);
-	ssize_t (*store)(struct netdev_rx_queue *queue,
-			 struct rx_queue_attribute *attr, const char *buf, size_t len);
-};
+#ifdef CONFIG_SYSFS
 #define to_rx_queue_attr(_attr) container_of(_attr, \
 	struct rx_queue_attribute, attr)
@@ -543,6 +533,7 @@ static const struct sysfs_ops rx_queue_sysfs_ops = {
 	.store = rx_queue_attr_store,
 };
+#ifdef CONFIG_RPS
 static ssize_t show_rps_map(struct netdev_rx_queue *queue,
 			    struct rx_queue_attribute *attribute, char *buf)
 {
@@ -718,16 +709,20 @@ static struct rx_queue_attribute rps_cpus_attribute =
 static struct rx_queue_attribute rps_dev...
2014 Jan 11
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
Hi Jason, Michael. Sorry for the delay in response. Jason, I agree this patch
ended up being larger than expected. The major implementation parts are:
(1) Set up the directory structure (driver/per-netdev/rx-queue directories)
(2) Network device renames (optional, so the debugfs dir has the right name)
(3) Support resizing the # of RX queues (optional - we could just export
max_queue_pairs files and
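A rough sketch of what step (1), the debugfs directory hierarchy, might look like; the names and layout here are illustrative guesses, not taken from the patch:

#include <linux/debugfs.h>

static struct dentry *virtnet_debugfs_root;

/* Illustrative only: build driver/per-netdev/per-rx-queue directories. */
static void virtnet_debugfs_init(struct net_device *dev,
				 unsigned int num_rx_queues)
{
	struct dentry *dev_dir;
	char name[16];
	unsigned int i;

	virtnet_debugfs_root = debugfs_create_dir("virtio-net", NULL);
	dev_dir = debugfs_create_dir(dev->name, virtnet_debugfs_root);

	for (i = 0; i < num_rx_queues; i++) {
		snprintf(name, sizeof(name), "rx-%u", i);
		debugfs_create_dir(name, dev_dir);
	}
}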
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not __GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
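The fallback strategy this commit message describes can be sketched as follows. This is illustrative only, not the actual skb_page_frag_refill() code in net/core/sock.c; frag_page_alloc() is a made-up helper name.

#include <linux/gfp.h>

/* Try the largest order first, suppressing warnings and reclaim
 * retries so atomic callers fail fast, then step down toward order-0. */
static struct page *frag_page_alloc(gfp_t prio, unsigned int max_order)
{
	unsigned int order;
	struct page *page;

	for (order = max_order; order > 0; order--) {
		gfp_t gfp = prio | __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

		page = alloc_pages(gfp, order);
		if (page)
			return page;
	}

	/* Last resort: a plain order-0 alloc with the caller's flags. */
	return alloc_pages(prio, 0);
}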
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
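The auto-tuning described here boils down to tracking an EWMA of received packet length per queue and sizing the next posted buffer from it. A minimal sketch using the linux/average.h API of that era (the estimator must first be set up with ewma_init(); the floor constant is illustrative, and the real code also accounts for the mergeable header and buffer alignment):

#include <linux/average.h>
#include <linux/kernel.h>

#define MIN_MRG_BUF_LEN	256	/* illustrative floor, not from the patch */

/* Fold each received packet's length into the per-queue estimate. */
static void record_pkt_len(struct ewma *avg_pkt_len, unsigned int len)
{
	ewma_add(avg_pkt_len, len);
}

/* Size the next posted buffer from the estimate, clamped to a page. */
static unsigned int next_mergeable_buf_len(struct ewma *avg_pkt_len)
{
	return clamp_t(unsigned int, ewma_read(avg_pkt_len),
		       MIN_MRG_BUF_LEN, PAGE_SIZE);
}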
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not __GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation