Search for: rx_queue_attribut

Results from an estimated 36 matches for "rx_queue_attribut".

2014 Jan 16 · 0 replies · [PATCH net-next v3 4/5] net-sysfs: add support for device-specific rx queue sysfs attributes
...ev_rx_queue { +#ifdef CONFIG_RPS struct rps_map __rcu *rps_map; struct rps_dev_flow_table __rcu *rps_flow_table; +#endif struct kobject kobj; struct net_device *dev; } ____cacheline_aligned_in_smp; -#endif /* CONFIG_RPS */ + +/* + * RX queue sysfs structures and functions. + */ +struct rx_queue_attribute { + struct attribute attr; + ssize_t (*show)(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attr, char *buf); + ssize_t (*store)(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attr, const char *buf, size_t len); +}; #ifdef CONFIG_XPS /* @@ -1313,7 +1326,7 @@ s...
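
The hunk in this excerpt, reflowed for readability (reconstructed only from the text above; the elided context is omitted). The RPS fields become conditional on CONFIG_RPS while the kobject and net_device backpointer stay unconditional, and the new struct rx_queue_attribute gives each attribute typed show/store callbacks:

struct netdev_rx_queue {
#ifdef CONFIG_RPS
        struct rps_map __rcu *rps_map;
        struct rps_dev_flow_table __rcu *rps_flow_table;
#endif
        struct kobject kobj;
        struct net_device *dev;
} ____cacheline_aligned_in_smp;

/*
 * RX queue sysfs structures and functions.
 */
struct rx_queue_attribute {
        struct attribute attr;
        ssize_t (*show)(struct netdev_rx_queue *queue,
                        struct rx_queue_attribute *attr, char *buf);
        ssize_t (*store)(struct netdev_rx_queue *queue,
                         struct rx_queue_attribute *attr, const char *buf,
                         size_t len);
};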

2014 Jan 16 · 0 replies · [PATCH net-next v4 4/6] net-sysfs: add support for device-specific rx queue sysfs attributes
...ev_rx_queue { +#ifdef CONFIG_RPS struct rps_map __rcu *rps_map; struct rps_dev_flow_table __rcu *rps_flow_table; +#endif struct kobject kobj; struct net_device *dev; } ____cacheline_aligned_in_smp; -#endif /* CONFIG_RPS */ + +/* + * RX queue sysfs structures and functions. + */ +struct rx_queue_attribute { + struct attribute attr; + ssize_t (*show)(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attr, char *buf); + ssize_t (*store)(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attr, const char *buf, size_t len); +}; #ifdef CONFIG_XPS /* @@ -1313,7 +1326,7 @@ s...

2014 Jan 12 · 3 replies · [PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...the attributes in the group to the RX queue kobject. That should allow us to have per-RX queue attributes that are device-specific. I'm not a sysfs expert, but it seems that rx_queue_ktype and rx_queue_sysfs_ops presume that all rx queue sysfs operations are performed on attributes of type rx_queue_attribute. That type will need to be moved from net-sysfs.c to a header file like netdevice.h so that the type can be used in virtio-net when we declare the mergeable_rx_buffer_size attribute. The last issue is how the rx_queue_attribute 'show' function implementation for mergeable_rx_buffer_size w...
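
A minimal sketch of the declaration being discussed, assuming the type has been moved to a shared header. The attribute and group names follow the later patches in this thread; the __ATTR_RO usage and the group name are assumptions of the sketch, not quoted from the message:

/* Per-RX-queue, device-specific sysfs attribute for virtio-net
 * (hypothetical declaration; a show() body appears later in the thread). */
static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
                                             struct rx_queue_attribute *attr,
                                             char *buf);

static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
        __ATTR_RO(mergeable_rx_buffer_size);

static struct attribute *virtio_net_mrg_rx_attrs[] = {
        &mergeable_rx_buffer_size_attribute.attr,
        NULL
};

static const struct attribute_group virtio_net_mrg_rx_group = {
        .name  = "virtio_net",
        .attrs = virtio_net_mrg_rx_attrs
};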

2014 Jan 16 · 13 replies · [PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
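
A sketch of the fallback strategy this commit message describes; the helper name and the starting order are illustrative, not taken from the patch:

/* Try progressively smaller compound pages, starting from a preferred
 * order. __GFP_NORETRY and __GFP_NOWARN keep the opportunistic
 * higher-order attempts cheap, so a failure falls through to the next
 * smaller order until an order-0 allocation is tried. */
static struct page *frag_page_alloc(gfp_t gfp, unsigned int *order)
{
        int o;

        for (o = get_order(32768); o >= 0; o--) {
                gfp_t gfp_mask = gfp;
                struct page *page;

                if (o)
                        gfp_mask |= __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN;
                page = alloc_pages(gfp_mask, o);
                if (page) {
                        *order = o;
                        return page;
                }
        }
        return NULL;
}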

2014 Jan 16 · 6 replies · [PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation

2014 Jan 17 · 7 replies · [PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g, PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This
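
The auto-tuning this series implements keeps a per-receive-queue EWMA of received packet length and sizes the next buffer from it. A condensed sketch, assuming the 2014-era linux/average.h API; GOOD_PACKET_LEN and MERGEABLE_BUFFER_ALIGN are virtio-net-local names and shown here as assumptions:

static unsigned int get_mergeable_buf_len(struct ewma *avg_pkt_len)
{
        const unsigned int hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
        unsigned int len;

        /* Size the buffer from the running average, clamped between an
         * MTU-sized minimum and PAGE_SIZE minus the mergeable header. */
        len = hdr_len + clamp_t(unsigned int, ewma_read(avg_pkt_len),
                                GOOD_PACKET_LEN, PAGE_SIZE - hdr_len);
        return ALIGN(len, MERGEABLE_BUFFER_ALIGN);
}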

2014 Jan 17 · 7 replies · [PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g, PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This

2014 Jan 13 · 0 replies · [PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
...to the RX queue kobject. Exactly. > That should allow us to have per-RX queue attributes that are > device-specific. I'm not a sysfs expert, but it seems that rx_queue_ktype > and rx_queue_sysfs_ops presume that all rx queue sysfs operations are > performed on attributes of type rx_queue_attribute. That type will need > to be moved from net-sysfs.c to a header file like netdevice.h so that > the type can be used in virtio-net when we declare the > mergeable_rx_buffer_size attribute. > > The last issue is how the rx_queue_attribute 'show' function > implementation...
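
The assumption confirmed here is visible in net-sysfs.c: the rx queue kobject's sysfs_ops unconditionally container_of() every attribute to struct rx_queue_attribute, so a driver-supplied attribute must be of that type. Paraphrased from net-sysfs.c:

static ssize_t rx_queue_attr_show(struct kobject *kobj,
                                  struct attribute *attr, char *buf)
{
        /* Recover the typed attribute and the queue owning this kobject. */
        struct rx_queue_attribute *attribute = to_rx_queue_attr(attr);
        struct netdev_rx_queue *queue = to_rx_queue(kobj);

        if (!attribute->show)
                return -EIO;

        return attribute->show(queue, attribute, buf);
}

static ssize_t rx_queue_attr_store(struct kobject *kobj,
                                   struct attribute *attr,
                                   const char *buf, size_t count)
{
        struct rx_queue_attribute *attribute = to_rx_queue_attr(attr);
        struct netdev_rx_queue *queue = to_rx_queue(kobj);

        if (!attribute->store)
                return -EIO;

        return attribute->store(queue, attribute, buf, count);
}

static const struct sysfs_ops rx_queue_sysfs_ops = {
        .show  = rx_queue_attr_show,
        .store = rx_queue_attr_store,
};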

2014 Jan 16 · 0 replies · [PATCH net-next v4 6/6] virtio-net: initial rx sysfs support, export mergeable rx buffer size
...); + len = get_mergeable_buf_len(&rq->mrg_avg_pkt_len); if (unlikely(!skb_page_frag_refill(len, alloc_frag, gfp))) return -ENOMEM; @@ -1594,6 +1601,33 @@ err: return ret; } +#ifdef CONFIG_SYSFS +static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attribute, char *buf) +{ + struct virtnet_info *vi = netdev_priv(queue->dev); + unsigned int queue_index = get_netdev_rx_queue_index(queue); + struct ewma *avg; + + BUG_ON(queue_index >= vi->max_queue_pairs); + avg = &vi->rq[queue_index].mrg_avg_pkt_len; + return sprintf(buf, &quo...
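
The show function from this hunk, reflowed. The search excerpt cuts off mid-line; the final statement is reconstructed from the 2017 "fix support for small rings" excerpt further down, which quotes it verbatim before changing it:

#ifdef CONFIG_SYSFS
static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
                struct rx_queue_attribute *attribute, char *buf)
{
        struct virtnet_info *vi = netdev_priv(queue->dev);
        unsigned int queue_index = get_netdev_rx_queue_index(queue);
        struct ewma *avg;

        BUG_ON(queue_index >= vi->max_queue_pairs);
        avg = &vi->rq[queue_index].mrg_avg_pkt_len;
        return sprintf(buf, "%u\n", get_mergeable_buf_len(avg));
}
#endif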

2014 Jan 16 · 0 replies · [PATCH net-next v3 5/5] virtio-net: initial rx sysfs support, export mergeable rx buffer size
...eq); ewma_init(&vi->rq[i].mrg_avg_pkt_len, 1, RECEIVE_AVG_WEIGHT); sg_init_table(vi->sq[i].sg, ARRAY_SIZE(vi->sq[i].sg)); } @@ -1594,6 +1615,39 @@ err: return ret; } +#ifdef CONFIG_SYSFS +static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue, + struct rx_queue_attribute *attribute, char *buf) +{ + struct virtnet_info *vi = netdev_priv(queue->dev); + unsigned int queue_index = get_netdev_rx_queue_index(queue); + struct receive_queue *rq; + struct ewma avg; + unsigned int start; + + BUG_ON(queue_index >= vi->max_queue_pairs); + rq = &vi->rq[queue_in...

2014 Jan 16 · 2 replies · [PATCH net-next v3 5/5] virtio-net: initial rx sysfs support, export mergeable rx buffer size
...kt_len, 1, RECEIVE_AVG_WEIGHT); > sg_init_table(vi->sq[i].sg, ARRAY_SIZE(vi->sq[i].sg)); > } > @@ -1594,6 +1615,39 @@ err: > return ret; > } > > +#ifdef CONFIG_SYSFS > +static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue, > + struct rx_queue_attribute *attribute, char *buf) > +{ > + struct virtnet_info *vi = netdev_priv(queue->dev); > + unsigned int queue_index = get_netdev_rx_queue_index(queue); > + struct receive_queue *rq; > + struct ewma avg; > + unsigned int start; > + > + BUG_ON(queue_index >= vi->max_queu...

2017 Mar 29 · 1 reply · [PATCH] virtio_net: fix support for small rings
...BUG_ON(queue_index >= vi->max_queue_pairs); avg = &vi->rq[queue_index].mrg_avg_pkt_len; - return sprintf(buf, "%u\n", get_mergeable_buf_len(avg)); + return sprintf(buf, "%u\n", + get_mergeable_buf_len(&vi->rq[queue_index], avg)); } static struct rx_queue_attribute mergeable_rx_buffer_size_attribute = -- MST
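
Why the signature changes: on a very small ring, a buffer sized purely from the packet-length average can be too small for a whole packet to fit across the ring's worth of buffers, so the length calculation needs per-queue state. A sketch of the reworked helper; treating rq->min_buf_len as a ring-size-derived floor is an assumption inferred from the diff above, not quoted from it:

static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
                                          struct ewma *avg_pkt_len)
{
        const unsigned int hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
        unsigned int len;

        /* Clamp against a per-queue minimum derived from the ring size, so
         * small rings still post buffers large enough for a whole packet. */
        len = hdr_len + clamp_t(unsigned int, ewma_read(avg_pkt_len),
                                rq->min_buf_len, PAGE_SIZE - hdr_len);
        return ALIGN(len, L1_CACHE_BYTES);
}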

2017 Mar 29 · 1 reply · [PATCH v2] virtio_net: fix support for small rings
...BUG_ON(queue_index >= vi->max_queue_pairs); avg = &vi->rq[queue_index].mrg_avg_pkt_len; - return sprintf(buf, "%u\n", get_mergeable_buf_len(avg)); + return sprintf(buf, "%u\n", + get_mergeable_buf_len(&vi->rq[queue_index], avg)); } static struct rx_queue_attribute mergeable_rx_buffer_size_attribute = -- MST