Displaying 11 results from an estimated 11 matches for "nicstate".
2012 Jun 25
4
[RFC V2 PATCH 0/4] Multiqueue support for tap and virtio-net/vhost
...patch, we
could pass multiple file descriptors to a single netdev by:
qemu -netdev tap,id=hn0,fd=10,fd=11,...
Patch 2 introduces generic helpers in tap to attach or detach a file descriptor
from a tap device; emulated NICs could use these helpers to enable/disable queues.
Patch 3 modifies NICState to allow multiple VLANClientState instances to be stored
in it; with this patch, qemu has basic support for a multiqueue-capable tap backend.
Patch 4 converts virtio-net/vhost to be multiqueue capable. The vhost devices were
created per tx/rx queue pair as usual.
Changes from V1:
- rebase to the latest
- fix mem...
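The fd-passing scheme above builds on the Linux multiqueue tun/tap API: each queue is a separate fd opened on /dev/net/tun and bound to the same device with IFF_MULTI_QUEUE, and a queue can later be enabled or disabled with TUNSETQUEUE. A minimal sketch, assuming only the standard kernel ioctls and flags (the helper names here are illustrative, not QEMU's):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Fill an ifreq that binds one more queue fd to tap device `name`.
 * IFF_MULTI_QUEUE is what lets several fds share one device. */
void tap_mq_prepare(struct ifreq *ifr, const char *name)
{
    memset(ifr, 0, sizeof(*ifr));
    ifr->ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
    strncpy(ifr->ifr_name, name, IFNAMSIZ - 1);
}

/* Open one queue fd on tap device `name`; returns the fd or -1.
 * Calling this N times yields the N fds passed as fd=10,fd=11,... */
int tap_mq_open_queue(const char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;
    tap_mq_prepare(&ifr, name);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Enable or disable one queue, roughly what the Patch 2 helpers do:
 * TUNSETQUEUE with IFF_ATTACH_QUEUE or IFF_DETACH_QUEUE. */
int tap_mq_set_enabled(int fd, int enable)
{
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = enable ? IFF_ATTACH_QUEUE : IFF_DETACH_QUEUE;
    return ioctl(fd, TUNSETQUEUE, &ifr);
}
```

Note that actually opening /dev/net/tun and issuing TUNSETIFF requires CAP_NET_ADMIN (or a pre-created persistent tap device owned by the caller).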
2012 Jul 06
5
[RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
...e
could pass multiple file descriptors to a single netdev by:
qemu -netdev tap,id=h0,queues=2,fd=10,fd=11 ...
Patch 2 introduces generic helpers in tap to attach or detach a file descriptor
from a tap device; emulated NICs could use these helpers to enable/disable queues.
Patch 3 modifies NICState to allow multiple VLANClientState instances to be stored
in it; with this patch, qemu has basic support for a multiqueue-capable tap backend.
Patch 4 implements a 1:1 mapping of tx/rx virtqueue pairs to vhost_net backends.
Patch 5 converts virtio-net to a multiqueue device; after this patch, multiqueue
virtio-net d...
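The 1:1 mapping in Patch 4 follows from the virtio-net virtqueue layout: queues come in rx/tx pairs (vq 0 = rx0, vq 1 = tx0, vq 2 = rx1, ...), so the pair index selects one vhost_net instance. A sketch of that index arithmetic, assuming the plain virtio-net layout (function names are illustrative, not QEMU's; in a multiqueue device the final control vq sits outside this pairing):

```c
/* virtio-net virtqueue layout: even index = rx, odd index = tx,
 * so virtqueues 2i and 2i+1 form queue pair i. With a 1:1 mapping,
 * pair i is served by vhost_net instance i. */
int vq_to_queue_pair(int vq_index)
{
    return vq_index / 2;
}

int vq_is_rx(int vq_index)
{
    return vq_index % 2 == 0;
}
```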
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
Dtrace and Performance Teams,
I have the following IO-performance-specific questions (I'm already
savvy with lockstat and the pre-dtrace utilities for performance
analysis, but I need details on pinpointing IO bottlenecks at the
controller or IO bus):
Q.A> Determining IO saturation bottlenecks (beyond service
times and kernel contention)
I'm
2009 Apr 08
2
maxbw minimum
The minimum is set at 1200 Kbits/second. However, in testing, if I set
that for a VNIC, the domU gets no traffic at all (maybe the occasional
packet). Is the minimum too low?
If I set a maximum of 2000 Kbits/second, I get this from nicstat
(expecting around 250 Kbytes/s total):
Time Int rKB/s wKB/s rPk/s wPk/s rAvs wAvs %Util Sat
04:35:38 xvm15_0 146.6 5.32 102.0
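The 250 Kbytes/s expectation is just the units conversion from the 2000 Kbits/second maxbw cap: divide by 8 bits per byte. A trivial helper making that explicit:

```c
/* Convert a bandwidth cap in Kbits/s into the expected
 * Kbytes/s throughput: 8 bits per byte. */
double maxbw_kbits_to_kbytes(double kbits_per_sec)
{
    return kbits_per_sec / 8.0;
}
```

By that arithmetic, the 146.6 + 5.32 KB/s that nicstat reports is well under the ~250 KB/s the cap should allow.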
2007 Oct 17
3
Dtrace scripts for performance data gathering
I am looking for Dtrace scripts that can be used to
collect data during performance tests. I am especially
interested in IO, but CPU, memory, threads, etc. are needed
as well.
Thanks,
Dave
2023 Mar 06
0
[PATCH v4 01/15] vdpa net: move iova tree creation from init to start
...ize;
> > >>> }
> > >>>
> > >>> +/** From any vdpa net client, get the netclient of first queue pair */
> > >>> +static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
> > >>> +{
> > >>> + NICState *nic = qemu_get_nic(s->nc.peer);
> > >>> + NetClientState *nc0 = qemu_get_peer(nic->ncs, 0);
> > >>> +
> > >>> + return DO_UPCAST(VhostVDPAState, nc, nc0);
> > >>> +}
> > >>> +
> > >>> +static void...
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi,
I'm struggling to get a stable ZFS replication using Solaris 10 11/06
(actual patches) and AVS 4.0 for several weeks now. We tried it on
VMware first and ended up in kernel panics en masse (yes, we read Jim
Dunham's blog articles :-). Now we try on the real thing, two X4500
servers. Well, I have no trouble replicating our kernel panics there,
too ... but I think I
2008 Nov 29
75
Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, file a meaningful bug report, fix it, whatever!
Both to learn what the compression could be, and to induce a heavy load to expose issues, I am running with compress=gzip-9.
I have two machines, both identical 800MHz P3 with