Displaying 20 results from an estimated 38 matches for "tot_sg".
2011 Jun 15
3
[PATCH] virtio-net: per cpu 64 bit stats
...skb->len;
+ stats->rx_packets++;
+ u64_stats_update_begin(&stats->syncp);
if (hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
pr_debug("Needs csum!\n");
@@ -515,11 +531,16 @@ static unsigned int free_old_xmit_skbs(s
{
struct sk_buff *skb;
unsigned int len, tot_sgs = 0;
+ struct virtnet_stats __percpu *stats = this_cpu_ptr(vi->stats);
while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
pr_debug("Sent skb %p\n", skb);
- vi->dev->stats.tx_bytes += skb->len;
- vi->dev->stats.tx_packets++;
+
+ u64_stats_update_...
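The excerpt above cuts off mid-call, but the mechanism it relies on is the generic u64_stats_sync one: per-cpu counters, writers bracketing their updates with u64_stats_update_begin()/u64_stats_update_end(), and readers retrying with the fetch/retry pair so 64-bit values stay consistent on 32-bit machines. A minimal sketch of that pattern follows; the virtnet_stats layout and the helper names are illustrative, not taken from the patch.

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/u64_stats_sync.h>

/* Illustrative layout; the patch's real struct may differ. */
struct virtnet_stats {
        struct u64_stats_sync syncp;
        u64 tx_bytes;
        u64 tx_packets;
};

/* Writer side (tx completion path): update the local CPU's counters
 * inside an update section so a 32-bit reader never sees a torn u64. */
static void example_account_tx(struct virtnet_stats __percpu *pcpu,
                               unsigned int len)
{
        struct virtnet_stats *stats = this_cpu_ptr(pcpu);

        u64_stats_update_begin(&stats->syncp);
        stats->tx_bytes += len;
        stats->tx_packets++;
        u64_stats_update_end(&stats->syncp);
}

/* Reader side (ndo_get_stats64-style): sum over all CPUs, retrying any
 * CPU whose writer was mid-update while we sampled it. */
static void example_fold_tx(struct virtnet_stats __percpu *pcpu,
                            u64 *bytes, u64 *packets)
{
        int cpu;

        *bytes = 0;
        *packets = 0;
        for_each_possible_cpu(cpu) {
                const struct virtnet_stats *stats = per_cpu_ptr(pcpu, cpu);
                unsigned int start;
                u64 b, p;

                do {
                        start = u64_stats_fetch_begin(&stats->syncp);
                        b = stats->tx_bytes;
                        p = stats->tx_packets;
                } while (u64_stats_fetch_retry(&stats->syncp, start));

                *bytes += b;
                *packets += p;
        }
}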
2011 Jun 01
5
[PATCH RFC 0/3] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity api. I also added more
comments to clarify the logic. Hope this is more readable. Let me know
pls.
This is on top of the patches applied by Rusty.
Note: there are now actually 2 calls to free_old_xmit_skbs on
the data path, so instead of passing flag '2' to the
second one I thought we can just make each call free up
at least 1 skb. This
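For orientation, the two data-path calls the note refers to sit in the transmit handler: one reclaims completed buffers before queueing the new packet, and a second runs if the ring looks full, to recheck before giving up. The sketch below only shows that shape; it is not the RFC's code, and xmit_skb() plus the virtnet_info fields are assumed from the surrounding excerpts.

#include <linux/netdevice.h>

/* Shape sketch only: two free_old_xmit_skbs() calls on the tx path.
 * xmit_skb() is assumed to return < 0 when the ring has no room;
 * real drivers also handle callback re-arming, kick suppression, etc. */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct virtnet_info *vi = netdev_priv(dev);

        /* First call: reclaim whatever the host already consumed. */
        free_old_xmit_skbs(vi);

        if (xmit_skb(vi, skb) < 0) {
                /* Ring full: stop the queue, then reclaim once more in
                 * case completions raced with the add. */
                netif_stop_queue(dev);
                if (free_old_xmit_skbs(vi))     /* second data-path call */
                        netif_start_queue(dev);
                return NETDEV_TX_BUSY;          /* let the stack requeue */
        }

        virtqueue_kick(vi->svq);
        return NETDEV_TX_OK;
}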
2011 Jun 02
6
[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity api. I also added more
comments to clarify the logic. Hope this is more readable. Let me know
pls.
This is on top of the patches applied by Rusty.
Warning: untested. Posting now to give people a chance to
comment on the API.
Changes from v1:
- fix comment in patch 2 to correct confusion noted by Rusty
- rewrite patch 3 along the lines
2012 Sep 28
6
[PATCH 0/3] virtio-net: inline header support
Thinking about Sasha's patches, we can reduce ring usage
for virtio net small packets dramatically if we put the
virtio net header inline with the data.
This can be done for free when the guest net stack has allocated
extra headroom for the packet, and I don't see
why this would have any downsides.
Even though with my recent patches qemu
no longer requires the header to be the first s/g element,
we
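The idea being described is to borrow packet headroom for the virtio-net header so that header and data land in one contiguous buffer, and hence one s/g entry. A minimal illustration, assuming the non-mergeable struct virtio_net_hdr and ignoring alignment and feature checks:

#include <linux/skbuff.h>
#include <linux/virtio_net.h>

/* Illustration only: if the stack left enough headroom, prepend the
 * virtio_net_hdr directly in front of the packet data, so header and
 * data can be handed to the ring as a single s/g element. */
static struct virtio_net_hdr *try_inline_hdr(struct sk_buff *skb)
{
        if (skb_headroom(skb) < sizeof(struct virtio_net_hdr))
                return NULL;    /* caller falls back to a separate header */

        return (struct virtio_net_hdr *)skb_push(skb,
                                        sizeof(struct virtio_net_hdr));
}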
2012 Jun 06
9
[PATCH] virtio-net: fix a race on 32bit arches
..."Sent skb %p\n", skb);
- u64_stats_update_begin(&stats->syncp);
+ u64_stats_update_begin(&stats->tx_syncp);
stats->tx_bytes += skb->len;
stats->tx_packets++;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_end(&stats->tx_syncp);
tot_sgs += skb_vnet_hdr(skb)->num_sg;
dev_kfree_skb_any(skb);
@@ -703,12 +704,16 @@ static struct rtnl_link_stats64 *virtnet_stats(struct net_device *dev,
u64 tpackets, tbytes, rpackets, rbytes;
do {
- start = u64_stats_fetch_begin(&stats->syncp);
+ start = u64_stats_fetch_begin(&...
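The problem being fixed is, roughly, that the tx and rx paths update the same per-cpu seqcount from different contexts, so one writer can interrupt the other and leave readers on 32-bit spinning or picking up torn values; the fix gives each direction its own u64_stats_sync. A sketch of the resulting shape, with the field names taken from the excerpt and the rest assumed:

#include <linux/types.h>
#include <linux/u64_stats_sync.h>

/* Assumed layout after the split: one u64_stats_sync per direction. */
struct virtnet_stats {
        struct u64_stats_sync tx_syncp;
        struct u64_stats_sync rx_syncp;
        u64 tx_bytes;
        u64 tx_packets;
        u64 rx_bytes;
        u64 rx_packets;
};

/* Reader: each direction is sampled under its own sequence counter,
 * so a tx update can no longer invalidate an rx read (and vice versa). */
static void fetch_one_cpu(const struct virtnet_stats *stats,
                          u64 *tpackets, u64 *tbytes,
                          u64 *rpackets, u64 *rbytes)
{
        unsigned int start;

        do {
                start = u64_stats_fetch_begin(&stats->tx_syncp);
                *tpackets = stats->tx_packets;
                *tbytes = stats->tx_bytes;
        } while (u64_stats_fetch_retry(&stats->tx_syncp, start));

        do {
                start = u64_stats_fetch_begin(&stats->rx_syncp);
                *rpackets = stats->rx_packets;
                *rbytes = stats->rx_bytes;
        } while (u64_stats_fetch_retry(&stats->rx_syncp, start));
}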
2012 Oct 16
6
[PATCH 1/5] virtio: move queue_index and num_free fields into core struct virtqueue.
They're generic concepts, so hoist them. This also avoids accessor
functions.
This goes even further than Jason Wang's 17bb6d4088 patch
("virtio-ring: move queue_index to vring_virtqueue") which moved the
queue_index from the specific transport.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
drivers/virtio/virtio_mmio.c | 4 ++--
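After the hoist, the common fields sit directly in the core struct, so transports and drivers can use vq->index and vq->num_free without per-transport accessors. A rough sketch of the resulting layout; the exact field set and order may differ from the real include/linux/virtio.h:

#include <linux/list.h>

struct virtio_device;

/* Rough post-hoist layout, for illustration only. */
struct virtqueue {
        struct list_head list;
        void (*callback)(struct virtqueue *vq);
        const char *name;
        struct virtio_device *vdev;
        unsigned int index;     /* hoisted: which vq of the device this is */
        unsigned int num_free;  /* hoisted: free descriptors remaining */
        void *priv;
};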
2012 Jun 06
2
[V2 RFC net-next PATCH 1/2] virtio_net: convert the statistics into array
...debug("Sent skb %p\n", skb);
u64_stats_update_begin(&stats->syncp);
- stats->tx_bytes += skb->len;
- stats->tx_packets++;
+ stats->data[VIRTNET_TX_BYTES] += skb->len;
+ stats->data[VIRTNET_TX_PACKETS]++;
u64_stats_update_end(&stats->syncp);
tot_sgs += skb_vnet_hdr(skb)->num_sg;
@@ -704,10 +708,10 @@ static struct rtnl_link_stats64 *virtnet_stats(struct net_device *dev,
do {
start = u64_stats_fetch_begin(&stats->syncp);
- tpackets = stats->tx_packets;
- tbytes = stats->tx_bytes;
- rpackets = stats->rx_packe...
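The conversion replaces individually named counters with an array indexed by an enum, which makes it easy to walk every counter in a loop (for example when exporting them through ethtool) instead of naming each field. A sketch, with the rx index names assumed by analogy to the VIRTNET_TX_* ones visible in the excerpt:

#include <linux/types.h>
#include <linux/u64_stats_sync.h>

/* VIRTNET_TX_* appear in the excerpt; the rx names and the overall
 * enum are assumed for illustration. */
enum {
        VIRTNET_TX_BYTES,
        VIRTNET_TX_PACKETS,
        VIRTNET_RX_BYTES,
        VIRTNET_RX_PACKETS,
        VIRTNET_NUM_STATS,
};

struct virtnet_stats {
        struct u64_stats_sync syncp;
        u64 data[VIRTNET_NUM_STATS];
};

/* Update example: the same u64_stats bracketing as before, but the
 * counters are now addressed by index rather than by field name. */
static void account_tx(struct virtnet_stats *stats, unsigned int len)
{
        u64_stats_update_begin(&stats->syncp);
        stats->data[VIRTNET_TX_BYTES] += len;
        stats->data[VIRTNET_TX_PACKETS]++;
        u64_stats_update_end(&stats->syncp);
}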
2011 May 19
22
[PATCHv2 00/14] virtio and vhost-net performance enhancements
OK, here is the large patchset that implements the virtio spec update
that I sent earlier (the spec itself needs a minor update, will send
that out too next week, but I think we are on the same page here
already). It supersedes the PUBLISH_USED_IDX patches I sent
out earlier.
What will follow is a patchset that actually includes 4 sets of
patches. I note their status below. Please consider
2011 May 04
27
[PATCH 00/18] virtio and vhost-net performance enhancements
OK, here's a large patchset that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
I know it's a lot to ask but please test, and please consider for 2.6.40 :)
I see nice performance improvements: one run showed going from 12
to 18 Gbit/s host to guest with netperf, but I did not spend a lot
of time testing performance,
2012 Nov 27
4
[net-next rfc v7 0/3] Multiqueue virtio-net
Hi all:
This series is an updated version of the multiqueue virtio-net driver, based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues for
packet reception and transmission. Please review and comment.
A prototype implementation of qemu-kvm support can be found at
git://github.com/jasowang/qemu-kvm-mq.git. To start a guest with two queues, you
could specify the queues
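At a high level, a multiqueue virtio-net driver keeps an array of per-queue structures and picks a tx queue from the skb's queue mapping. The layout and names below are illustrative only, not taken from this series:

#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Illustrative per-queue structures; the series' real ones may differ. */
struct send_queue {
        struct virtqueue *vq;
        struct scatterlist sg[MAX_SKB_FRAGS + 2];
};

struct receive_queue {
        struct virtqueue *vq;
        struct napi_struct napi;
};

struct virtnet_info {
        struct virtio_device *vdev;
        struct send_queue *sq;          /* one entry per tx queue */
        struct receive_queue *rq;       /* one entry per rx queue */
        u16 max_queue_pairs;
        u16 curr_queue_pairs;
};

/* Tx queue selection: the stack records a mapping on the skb, and the
 * driver indexes its per-queue array with it. */
static struct send_queue *skb_to_sq(struct virtnet_info *vi,
                                    struct sk_buff *skb)
{
        return &vi->sq[skb_get_queue_mapping(skb)];
}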