search for: rx_queue_add_kobject

Displaying 10 unique results from an estimated 17 matches for "rx_queue_add_kobject".

2014 Jan 12
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...r_netdevice(). (2) Declare an attribute group containing the mergeable_rx_buffer_size attribute in virtio-net, and initialize the per-netdevice group pointer to the address of this group in virtnet_probe before register_netdevice. (3) In net-sysfs, modify net_rx_queue_update_kobjects (or rx_queue_add_kobject) to call sysfs_create_group on the per-netdev attribute group (if non-NULL), adding the attributes in the group to the RX queue kobject. That should allow us to have per-RX queue attributes that are device-specific. I'm not a sysfs expert, but it seems that rx_queue_ktype and rx_queue_...
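A minimal C sketch of steps (2) and (3) as proposed here, assuming the sysfs_rx_queue_group netdev field and the rx_queue_attribute type that the net-sysfs patch in this thread exports; the group layout and the show-function body are illustrative, not the posted patch:

/* Step (2): declare the attribute group in virtio-net and point the
 * per-netdevice group pointer at it before register_netdevice(), so
 * step (3) in net-sysfs can attach it to each rx-<n> kobject. */
#include <linux/netdevice.h>
#include <linux/sysfs.h>

static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
					     struct rx_queue_attribute *attr,
					     char *buf)
{
	/* placeholder: report the device-specific per-queue buffer size */
	return sprintf(buf, "%u\n", 0u);
}

static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
	__ATTR_RO(mergeable_rx_buffer_size);

static struct attribute *virtio_net_mrg_rx_attrs[] = {
	&mergeable_rx_buffer_size_attribute.attr,
	NULL
};

static const struct attribute_group virtio_net_mrg_rx_group = {
	.attrs = virtio_net_mrg_rx_attrs,
};

/* in virtnet_probe(), before register_netdevice(dev): */
/*	dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group; */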
2014 Jan 16
0
[PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...
@@ -743,6 +738,7 @@ static void rx_queue_release(struct kobject *kobj)
 		RCU_INIT_POINTER(queue->rps_flow_table, NULL);
 		call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
 	}
+#endif
 	memset(kobj, 0, sizeof(*kobj));
 	dev_put(queue->dev);

@@ -767,21 +763,27 @@ static int rx_queue_add_kobject(struct net_device *net, int index)
 		kobject_put(kobj);
 		return error;
 	}
+	if (net->sysfs_rx_queue_group)
+		sysfs_create_group(kobj, net->sysfs_rx_queue_group);
 	kobject_uevent(kobj, KOBJ_ADD);
 	dev_hold(queue->dev);

 	return error;
 }
-#endif /* CONFIG_RPS */
+#endif /* CONFIG...
2014 Jan 16
0
[PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...
@@ -743,6 +738,7 @@ static void rx_queue_release(struct kobject *kobj)
 		RCU_INIT_POINTER(queue->rps_flow_table, NULL);
 		call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
 	}
+#endif
 	memset(kobj, 0, sizeof(*kobj));
 	dev_put(queue->dev);

@@ -767,21 +763,27 @@ static int rx_queue_add_kobject(struct net_device *net, int index)
 		kobject_put(kobj);
 		return error;
 	}
+	if (net->sysfs_rx_queue_group)
+		sysfs_create_group(kobj, net->sysfs_rx_queue_group);
 	kobject_uevent(kobj, KOBJ_ADD);
 	dev_hold(queue->dev);

 	return error;
 }
-#endif /* CONFIG_RPS */
+#endif /* CONFIG...
2014 Jan 16
1
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
From: David Miller <davem at davemloft.net>
Date: Thu, 16 Jan 2014 15:28:00 -0800 (PST)

> All 6 patches applied.

Actually, I reverted, please resubmit this series with the following build warning corrected:

net/core/net-sysfs.c: In function ‘rx_queue_add_kobject’:
net/core/net-sysfs.c:767:21: warning: ignoring return value of ‘sysfs_create_group’, declared with attribute warn_unused_result [-Wunused-result]

Thanks.
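One way to clear the warning is to check the sysfs_create_group() return value in rx_queue_add_kobject() and unwind on failure; a sketch of the adjusted hunk (the respun series does something along these lines, though this is not the exact v5 hunk):

	if (net->sysfs_rx_queue_group) {
		error = sysfs_create_group(kobj, net->sysfs_rx_queue_group);
		if (error) {
			kobject_put(kobj);
			return error;
		}
	}
	kobject_uevent(kobj, KOBJ_ADD);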
2014 Jan 13
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
> ...eclare an attribute group containing the mergeable_rx_buffer_size
> attribute in virtio-net, and initialize the per-netdevice group pointer
> to the address of this group in virtnet_probe before register_netdevice
> (3) In net-sysfs, modify net_rx_queue_update_kobjects
> (or rx_queue_add_kobject) to call sysfs_create_group on the
> per-netdev attribute group (if non-NULL), adding the attributes in
> the group to the RX queue kobject.

Exactly.

> That should allow us to have per-RX queue attributes that are
> device-specific. I'm not a sysfs expert, but it seems th...
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not __GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
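A minimal sketch of that fallback, assuming a hypothetical helper name and an order cap along the lines of SKB_FRAG_PAGE_ORDER; the exact gfp flags in the posted patch may differ:

#include <linux/gfp.h>
#include <linux/mm.h>

#define SKB_FRAG_PAGE_ORDER get_order(32768) /* 32 KB cap, illustrative */

/* Try a high-order alloc first, stepping down to order-0 on failure. */
static struct page *frag_alloc_fallback(gfp_t gfp, unsigned int *order)
{
	unsigned int o;
	struct page *page;

	for (o = SKB_FRAG_PAGE_ORDER; o > 0; o--) {
		/* opportunistic: suppress warnings and retries for high orders */
		page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN |
				   __GFP_NORETRY, o);
		if (page) {
			*order = o;
			return page;
		}
	}
	*order = 0;
	return alloc_pages(gfp, 0); /* final order-0 attempt */
}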
2014 Jan 11
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
Hi Jason, Michael. Sorry for the delay in response. Jason, I agree this patch ended up being larger than expected. The major implementation parts are: (1) Setup of the directory structure (driver/per-netdev/rx-queue directories), (2) Network device renames (optional, so the debugfs dir has the right name), (3) Support for resizing the # of RX queues (optional - we could just export max_queue_pairs files and
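A sketch of part (1), the driver/per-netdev/rx-queue debugfs layout, using only stock debugfs calls; the directory names and the helper are illustrative, not taken from the posted patch:

#include <linux/debugfs.h>
#include <linux/netdevice.h>

static struct dentry *virtnet_dbg_root;

/* Create virtio-net/<netdev>/rx-<n> directories under debugfs. */
static void virtnet_dbg_init(struct net_device *dev, int nqueues)
{
	struct dentry *dev_dir;
	char name[16];
	int i;

	virtnet_dbg_root = debugfs_create_dir("virtio-net", NULL);
	dev_dir = debugfs_create_dir(netdev_name(dev), virtnet_dbg_root);
	for (i = 0; i < nqueues; i++) {
		snprintf(name, sizeof(name), "rx-%d", i);
		debugfs_create_dir(name, dev_dir);
	}
}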
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
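The auto-tuning in this series tracks a per-queue average of received packet lengths; a minimal sketch of the idea using the old linux/average.h ewma helpers, with illustrative clamp constants (the posted patches choose their own bounds):

#include <linux/average.h>
#include <linux/kernel.h>

#define MERGEABLE_BUF_MIN 1536       /* roughly MTU-sized, illustrative */
#define MERGEABLE_BUF_MAX PAGE_SIZE

/* Fold a new packet length into the average and size the next buffer. */
static unsigned int mergeable_buf_len(struct ewma *avg_pkt_len,
				      unsigned int len_seen)
{
	ewma_add(avg_pkt_len, len_seen);
	return clamp_t(unsigned long, ewma_read(avg_pkt_len),
		       MERGEABLE_BUF_MIN, MERGEABLE_BUF_MAX);
}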
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not __GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation