Displaying 11 results from an estimated 11 matches for "nicstat".
2012 Jun 25
4
[RFC V2 PATCH 0/4] Multiqueue support for tap and virtio-net/vhost
...patch, we
could pass multiple file descriptors to a single netdev by:
qemu -netdev tap,id=hn0,fd=10,fd=11,...
Patch 2 introduces generic helpers in tap to attach or detach a file descriptor
from a tap device; emulated NICs can use these helpers to enable/disable queues.
Patch 3 modifies NICState to allow multiple VLANClientStates to be stored in
it; with this patch, qemu has basic support for a multiqueue-capable tap backend.
Patch 4 converts virtio-net/vhost to be multiqueue-capable. The vhost devices are
created per tx/rx queue pair as usual.
Changes from V1:
- rebase to the latest
- fix me...
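For readers unfamiliar with the kernel side these helpers wrap: on Linux kernels
with multiqueue tun/tap support, each open of /dev/net/tun with IFF_MULTI_QUEUE
and the same device name yields a new fd backed by its own queue, and queues are
later enabled or disabled with the TUNSETQUEUE ioctl. A minimal sketch of that
mechanism, with hypothetical helper names rather than code from the series:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    /* Open one queue of a multiqueue tap device: each open of
     * /dev/net/tun with IFF_MULTI_QUEUE and the same name yields
     * a new fd backed by its own queue. */
    static int tap_open_queue(const char *ifname)
    {
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);
        if (fd < 0)
            return -1;
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    /* Enable or disable an already-created queue; roughly what a
     * generic attach/detach helper would wrap. */
    static int tap_set_queue(int fd, int attach)
    {
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = attach ? IFF_ATTACH_QUEUE : IFF_DETACH_QUEUE;
        return ioctl(fd, TUNSETQUEUE, &ifr);
    }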
2012 Jul 06
5
[RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
...e
could pass multiple file descriptors to a single netdev by:
qemu -netdev tap,id=h0,queues=2,fd=10,fd=11 ...
Patch 2 introduces generic helpers in tap to attach or detach a file descriptor
from a tap device; emulated NICs can use these helpers to enable/disable queues.
Patch 3 modifies NICState to allow multiple VLANClientStates to be stored in
it; with this patch, qemu has basic support for a multiqueue-capable tap backend.
Patch 4 implements a 1:1 mapping of tx/rx virtqueue pairs with the vhost_net backend.
Patch 5 converts virtio-net to a multiqueue device; after this patch, multiqueue
virtio-net...
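In virtio-net's multiqueue layout (per the virtio spec), N queue pairs occupy
2N virtqueues with rx and tx alternating and the control queue last; the 1:1
mapping of Patch 4 gives each such pair its own vhost_net instance. A sketch of
the index arithmetic (helper names are illustrative, not from the series):

    /* virtio-net virtqueue layout for n queue pairs, per the
     * virtio spec: rx/tx alternate, the control vq comes last.
     * With queues=2: vq0=rx0, vq1=tx0, vq2=rx1, vq3=tx1, vq4=ctrl. */
    static inline int rx_vq_index(int pair)      { return 2 * pair; }
    static inline int tx_vq_index(int pair)      { return 2 * pair + 1; }
    static inline int ctrl_vq_index(int n_pairs) { return 2 * n_pairs; }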
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
DTrace and Performance teams,
I have the following IO-performance-specific questions (I'm already
savvy with lockstat and the pre-DTrace
utilities for performance analysis, but I need details on pinpointing
IO bottlenecks at the controller or IO bus):
Q.A> Determining IO saturation bottlenecks (beyond service
times and kernel contention)
I'm
2009 Apr 08
2
maxbw minimum
The minimum is set at 1200 Kbits/second. However, in testing, if I set
that for a VNIC, the domU gets no traffic at all (maybe the occasional
packet). Is the minimum too low?
If I set a maximum of 2000 Kbits/second, I get this from nicstat
(expecting around 250 Kbytes/s total, i.e. 2000 Kbits/s divided by 8):
Time      Int      rKB/s  wKB/s  rPk/s  wPk/s    rAvs   wAvs  %Util   Sat
04:35:38  xvm15_0  146.6   5.32  102.0  73.65  1471.3  74.00  62.21  0.00
04:35:43  xvm15_0  161.6   5.92  112.0  82.24  1477.4  73.68  68.60  0.00
What is the expected accura...
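For scale: summing the sample above gives 146.6 + 5.32 ≈ 152 KB/s, and
152 / 250 ≈ 61%, which roughly matches the 62.21 under %Util. That is
consistent with (though this is only an inference from the numbers shown)
nicstat computing utilization against the configured maxbw rather than the
physical link speed.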
2007 Oct 17
3
Dtrace scripts for performance data gathering
I am looking for DTrace scripts that can be used to
collect data during performance tests. I am especially
interested in IO, but CPU, memory, threads, etc. are needed
as well.
Thanks,
Dave
2023 Mar 06
0
[PATCH v4 01/15] vdpa net: move iova tree creation from init to start
...ize;
> > >>> }
> > >>>
> > >>> +/** From any vdpa net client, get the netclient of first queue pair */
> > >>> +static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
> > >>> +{
> > >>> + NICState *nic = qemu_get_nic(s->nc.peer);
> > >>> + NetClientState *nc0 = qemu_get_peer(nic->ncs, 0);
> > >>> +
> > >>> + return DO_UPCAST(VhostVDPAState, nc, nc0);
> > >>> +}
> > >>> +
> > >>> +static voi...
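A note on the DO_UPCAST in the quoted hunk: it recovers the containing
VhostVDPAState from its embedded NetClientState, which is only valid because
nc is the struct's first member. A self-contained sketch of the general
pattern, with illustrative types rather than QEMU's own:

    #include <stddef.h>

    /* container_of-style recovery of an outer struct from a pointer
     * to one of its members; QEMU's DO_UPCAST additionally requires
     * the member to sit at offset 0. Types here are illustrative. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct net_client { int queue_index; };

    struct vdpa_state {
        struct net_client nc;   /* first member, as DO_UPCAST expects */
        int vq_index;
    };

    static struct vdpa_state *state_from_nc(struct net_client *nc)
    {
        return container_of(nc, struct vdpa_state, nc);
    }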
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
...Okay, let's imagine I switched to the secondary host because I had a
problem with the primary. Now it's repaired, and I want my redundancy back.
* "sndradm -E -f ...." on both hosts - works.
* "sndradm -u -r" on the primary for refreshing the primary - works.
`nicstat` shows me a bit of traffic.
Good, let's switch back to the primary. Actual status: the zpool is imported
on the secondary and NOT imported on the primary.
* "zpool export tank" on the secondary - *kernel panic*
Sadly, the machine dies fast; I don't see the kernel panic w...
2008 Nov 29
75
Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, file a meaningful bug report, fix it, whatever!
Both to learn what the compression ratio could be, and to induce a heavy load to expose issues, I am running with compress=gzip-9.
I have two machines, both identical 800MHz P3 with