Using 10G interfaces on kernel 3.2 + Xen 4.1.2 we're seeing:

dom0 ~9.2 Gbps
domU ~2.5 Gbps

dmesg on domU:
XENBUS: Device with no driver: device/vif/0

Is this normal?

Thanks
Kristoffer
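(For anyone wanting to reproduce this kind of measurement: a minimal test along these lines, assuming iperf is installed on both ends - the options shown are only an example:)

  # on the receiving end (dom0 or an external host):
  iperf -s
  # on the sending end (the domU):
  iperf -c <receiver-ip> -t 30 -P 4    # -P runs parallel streams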
On Thursday, 12 April 2012, 00:35:22, Kristoffer Harthing Egefelt wrote:
> Using 10G interfaces on kernel 3.2 + Xen 4.1.2 we're seeing:
> dom0 ~9.2 Gbps
> domU ~2.5 Gbps
>
> dmesg on domU:
> XENBUS: Device with no driver: device/vif/0
>
> Is this normal?

In non-PV environments (HVM) this could be "normal"... Do you use PV?

Niels.
--
---
Niels Dettenbach
Syndicat IT & Internet
http://www.syndicat.com
---
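(For reference, a rough way to check from inside the guest whether it is running paravirtualised and whether the PV network driver is active - only a sketch, exact paths and module names depend on the kernel configuration:)

  cat /sys/hypervisor/type                   # should print "xen"
  dmesg | grep -i "paravirtualized kernel"   # PV guests log "Booting paravirtualized kernel on Xen"
  lsmod | grep xen_netfront                  # PV net driver, if built as a module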
Kristoffer Harthing Egefelt wrote:
> Using 10G interfaces on kernel 3.2 + Xen 4.1.2 we're seeing:
> dom0 ~9.2 Gbps
> domU ~2.5 Gbps
>
> dmesg on domU:
> XENBUS: Device with no driver: device/vif/0
>
> Is this normal?

I'm guessing so. I see those messages on every guest when I boot them, then a short time later there are messages relating to drivers/devices. Eg:

[    0.500122] XENBUS: Device with no driver: device/vbd/51713
[    0.500130] XENBUS: Device with no driver: device/vbd/51714
[    0.500137] XENBUS: Device with no driver: device/vbd/51715
[    0.500146] XENBUS: Device with no driver: device/vif/0
[    0.500153] XENBUS: Device with no driver: device/vif/1
[    0.500160] XENBUS: Device with no driver: device/console/0
...
[    0.575270] Initialising Xen virtual ethernet driver.
[    0.669515] blkfront: xvda1: barrier: enabled
[    0.672064] Setting capacity to 4194304
[    0.672089] xvda1: detected capacity change from 0 to 2147483648
[    0.673131] blkfront: xvda2: barrier: enabled
[    0.676030] Setting capacity to 4194304
[    0.676056] xvda2: detected capacity change from 0 to 2147483648
[    0.678393] blkfront: xvda3: barrier: enabled
[    0.700179] Setting capacity to 1048576

As to performance, that has always been a weakness with Xen. AIUI, all the virtual network traffic is handled by a single thread in Dom0, and this creates a bottleneck. There is certainly a huge performance difference between accessing an iSCSI volume natively in the guest (poor) vs accessing it on Dom0 and passing it through as a virtual block device (good).

--
Simon Hobson
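(If anyone wants to check whether that single dom0 thread really is the bottleneck on their own setup, one crude way is to watch the netback kernel thread in dom0 while the domU pushes traffic - thread names vary between kernel versions, so treat this only as a sketch:)

  # in dom0, during a throughput test from the domU:
  top -b -n 1 | grep -i netback
  # or show which CPU the thread runs on and how busy it is:
  ps -eLo pid,psr,pcpu,comm | grep -i netback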
> I'm guessing so. I see those messages on every guest when I boot
> them, then a short time later there are messages relating to
> drivers/devices.

I neglected to include that these are PV guests.

--
Simon Hobson
> I neglected to include that these are PV guests.

Reading these:

http://www.xen.org/files/xensummit_oracle09/xensummit_networking.pdf

and

http://www.google.com/url?sa=t&rct=j&q=xen%20domu%2010gb&source=web&cd=1&ved=0CCMQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.187.7341%26rep%3Drep1%26type%3Dpdf&ei=iziHT6_hFcXktQbfw6XSBg&usg=AFQjCNHUdWmeYC4zR5j1p_TFtGAevDaVjw

it looks like the issue is solved, with ~8 Gbps in a single-processor domU using the PV net driver. Or am I missing something?

Do I need SR-IOV to get more than 2.5 Gbps in a single domU?

Is it true that live migration and ACL/QoS in Open vSwitch do not work with SR-IOV?

Thanks
Kristoffer
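(As an aside: a quick way to see whether a given NIC advertises the SR-IOV capability at all - the PCI address below is only a placeholder:)

  lspci -vvv -s 03:00.0 | grep -i -A4 "SR-IOV"
  # if supported, the output includes the capability header and a "Total VFs" count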
Kristoffer Harthing Egefelt wrote:
> http://www.xen.org/files/xensummit_oracle09/xensummit_networking.pdf
>
> and
>
> http://www.google.com/url?sa=t&rct=j&q=xen%20domu%2010gb&source=web&cd=1&ved=0CCMQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.187.7341%26rep%3Drep1%26type%3Dpdf&ei=iziHT6_hFcXktQbfw6XSBg&usg=AFQjCNHUdWmeYC4zR5j1p_TFtGAevDaVjw
>
> It looks like the issue is solved, ~8 Gbps in a single-processor domU
> using the PV net driver. Or am I missing something?

I think you are missing the requirement for modified software and specialised I/O hardware. Neither of those papers describes dealing with the problem using a standard NIC (i.e. anything you'll find in a commodity server).

--
Simon Hobson
> I think you are missing the requirement for modified software and
> specialised I/O hardware. Neither of those papers describes dealing
> with the problem using a standard NIC (i.e. anything you'll find in a
> commodity server).

Alright ;-)

So the conclusion is that if a server needs more than 2 Gbps, (Xen) virtualization will not provide it?

The NICs we currently use (QLogic 8042) can be partitioned into 4 interfaces - I'll try with SR-IOV and see if that improves the performance.

Thanks
Kristoffer
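(In case it's useful to others trying the same thing: how VFs get enabled depends entirely on the NIC driver and kernel - the lines below are only a sketch with example values, so check the driver documentation for the real parameter names:)

  # module-parameter style (e.g. Intel's ixgbe uses max_vfs; other drivers use their own names):
  modprobe ixgbe max_vfs=4
  # sysfs style, on kernels newer than 3.2 that support it:
  echo 4 > /sys/class/net/eth0/device/sriov_numvfs
  # the resulting VFs show up as additional PCI functions:
  lspci | grep -i ethernet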
On Saturday, 14 April 2012, 04:26:31, Kristoffer Harthing Egefelt wrote:
> So the conclusion is that if a server needs more than 2 Gbps,
> (Xen) virtualization will not provide it?

Not sure, but maybe PCI passthrough could be a suitable solution here if you want to use the interface in just one VM.

best regards,
Niels.
--
---
Niels Dettenbach
Syndicat IT & Internet
http://www.syndicat.com
---
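(For completeness, a minimal sketch of what PCI passthrough of the whole NIC looks like - the address 0000:03:00.0 is only a placeholder, and the exact way of handing the device to the pciback driver varies with the dom0 kernel and toolstack:)

  # domU config file: give the whole device to the guest
  pci = [ '0000:03:00.0' ]

  # dom0 side: detach the device from its normal driver and bind it to xen-pciback, e.g.
  modprobe xen-pciback
  echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
  echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
  echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind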