I have a guest running under Xen, with a RAID controller that gives me:

 Timing cached reads:        6754 MB in 1.99 seconds = 3390.99 MB/sec
 Timing buffered disk reads:  536 MB in 3.00 seconds =  178.49 MB/sec

Samba 3.6.6 gives me low performance: I get around 40 MB/s over gigabit and 10 MB/s over wireless N. Is there a way to configure the domU to have a 10 gigabit interface and improve Samba performance?
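Those figures match the output format of hdparm -tT. For anyone wanting to reproduce the baseline, a minimal sketch, assuming the guest's disk shows up as /dev/xvda (adjust the device name for your setup):

    # raw read benchmark inside the domU; run it a few times,
    # since the cached-read figure varies with system load
    hdparm -tT /dev/xvda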
jacek burghardt wrote:
> I have a guest running under Xen, with a RAID controller that gives me:
>   Timing cached reads:        6754 MB in 1.99 seconds = 3390.99 MB/sec
>   Timing buffered disk reads:  536 MB in 3.00 seconds =  178.49 MB/sec
> Samba 3.6.6 gives me low performance: I get around 40 MB/s over
> gigabit and 10 MB/s over wireless N.
> Is there a way to configure the domU to have a 10 gigabit interface
> and improve Samba performance?

Firstly, your DomU is limited to the speed of the host NICs - any reference to 100Mbps NICs is just a case of what's emulated, not an indication that the speed is restricted to 100Mbps.

Note: it's some time since I've used Samba, so this may be out of date ...

Samba has many config options which can affect throughput. Some are set "conservatively safe but slow", which is one reason it sometimes benchmarks badly against a Windows server. I'd suggest looking for resources related to performance tuning Samba.

Also, DomU network access relies on a thread in Dom0. It's well known that this can be a significant performance bottleneck.

-- 
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed author Gladys Hobson. Novels - poetry - short stories - ideal as Christmas stocking fillers. Some available as e-books.
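As a concrete starting point, these are commonly cited [global] tuning options for Samba 3.x-era servers - a sketch only, with illustrative values; check each one against the smb.conf man page and testparm output for your exact version:

    [global]
        # let the kernel copy file data straight to the socket
        use sendfile = yes
        # disable Nagle and enlarge the per-connection socket buffers
        # (buffer sizes here are example values, not recommendations)
        socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
        # allow larger SMB packets to be negotiated
        max xmit = 65535
        # use asynchronous I/O for requests above 16 KB
        # (only effective if Samba was built with AIO support)
        aio read size = 16384
        aio write size = 16384

Change one option at a time and re-benchmark; several of these interact with the kernel buffer settings that come up later in this thread.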
Simon Hobson <linux@thehobsons.co.uk> schrieb:
> Firstly, your DomU is limited to the speed of the host NICs - any
> reference to 100Mbps NICs is just a case of what's emulated, not an
> indication that the speed is restricted to 100Mbps.

Does this mean in practice that any DomU-to-DomU virtual NIC throughput is limited by the host's NIC hardware, or did I understand something completely wrong here?

> Samba has many config options which can affect throughput. Some are
> set "conservatively safe but slow", which is one reason it sometimes
> benchmarks badly against a Windows server. I'd suggest looking for
> resources related to performance tuning Samba.

In my experience with "current" Sambas (3.x and up), the performance-relevant knobs only buy relatively small gains, as Samba already behaves in a flexibly optimized way in most scenarios I know. You might experiment with some of the buffer settings in the Samba config (some of them really just drive the OS network subsystem, so the result may depend on your OS configuration too).

> Also, DomU network access relies on a thread in Dom0. It's well known
> that this can be a significant performance bottleneck.

Sorry, but can you explain this a bit more? This is partly new to me and sounds interesting...

many thanks in advance,
Niels.

- -- 
Niels Dettenbach
Syndicat IT&Internet
http://www.syndicat.com
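One way to answer the DomU-to-DomU part empirically is to take Samba out of the picture and measure the raw TCP path with iperf - a quick sketch, assuming iperf is installed in both guests and 192.168.1.20 stands in for the receiving DomU's address:

    # on the receiving DomU: run an iperf server
    iperf -s

    # on the sending DomU: run a 30-second TCP throughput test
    iperf -c 192.168.1.20 -t 30

If two DomUs on the same bridge measure faster than the physical NIC's line rate, their traffic clearly never touches the hardware; if they can't, the bottleneck is in the virtual network path itself.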
Niels Dettenbach wrote:
>> Also, DomU network access relies on a thread in Dom0. It's well known
>> that this can be a significant performance bottleneck.
>
> Sorry, but can you explain this a bit more? This is partly new to me
> and sounds interesting...

I'm no expert - I'm sure someone will pop up and tell me it's all wrong! This is also from the perspective of a PV guest (or at least one using PV drivers) - there are some differences for HVM guests.

DomU network traffic is handled by Dom0. Ie, when a DomU "sends" a network packet, it pushes it out through what looks like a NIC but is in fact an interface to code running in Dom0. The thread running in Dom0 then deals with passing the packet through to the right place - in a bridge setup, this means presenting it to the bridge as though it came from a NIC.

See this image (under "DomU and Dom0 on real network"):
http://new-wiki.xen.org/old-wiki/xenwiki/XenNetworkingUsecase.html

The key thing is that eth0 in Dom1 is not a real NIC; it's a bit of Xen-provided code that has the same interface to the guest OS as would be used for any other NIC. So the guest OS writes a packet using the standard interface for sending packets via a NIC. The netfront bit of Xen (running in the DomU) now has the packet and passes it over to the netback bit of Xen (running in Dom0). This also behaves much like the driver for a real NIC, this time presenting a packet it's just received from its "network" - so to the bridge code in Dom0, it looks like the packet just arrived on interface vif1.0. The packet still has to go through the bridge code, and either out through a real NIC, through another virtual network to a different DomU, or be handled by Dom0 itself - depending on destination.

Receiving a packet is much the same, just with the directions swapped round.

Now, from previous comments on this list, I believe the code in Dom0 that does all this packet handling is single-threaded. In high-traffic environments this means it becomes a bottleneck, and people have benchmarked some poor results which are blamed on this. How much of an issue it is will vary considerably with the setup. If you have multiple cores dedicated (pinned) for Dom0 use only, then it probably isn't an issue. If you don't have any cores pinned for Dom0 and/or don't have many cores, then it's more likely to be an issue. Bear in mind that all your disk I/O has to go through a similar level of abstraction, so there's a lot going on in Dom0.

-- 
Simon Hobson
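For anyone wanting to try the pinning suggestion: on a GRUB-booted Xen host this is usually done on the hypervisor command line - a sketch, assuming GRUB legacy paths and that two dedicated vCPUs suit your core count:

    # in the Xen entry of /boot/grub/menu.lst:
    # give Dom0 two vCPUs and pin them to matching physical CPUs,
    # so netback/blkback work is not starved by guest load
    kernel /boot/xen.gz dom0_max_vcpus=2 dom0_vcpus_pin

    # or pin at runtime (xm syntax; xl takes the same arguments):
    xm vcpu-pin Domain-0 0 0
    xm vcpu-pin Domain-0 1 1

Note that this pins Dom0's vCPUs but does not by itself keep guests off those physical CPUs; for full isolation the guests' vCPUs also need to be pinned (or pooled) elsewhere.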
Well, I played around with network cards and now Samba starts the transfer at 128 MB/s, then drops to 100-90 MB/s, then settles down to 65 MB/s. Any way to configure Dom0 to get better performance?

I had found this "HowTo: Copper 10 Gigabit NICs on Xen hypervisors". With the script below we were able to get a single VM to talk to another VM at slightly better than 3 Gbit/second. Evidently this is unheard of. Proprietary and vendor-specific tweaks are not listed, but we may be tapped for assistance on this subject. Please contact us (or leave a comment) with any questions or concerns.

#!/bin/bash
# lengthen the NIC transmit queue
ifconfig eth0 txqueuelen 300000
# raise the socket buffer ceilings to the bandwidth-delay product (BDP)
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"   # min default BDP
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"   # min default BDP
# allow more packets to queue before the kernel starts dropping them
sysctl -w net.core.netdev_max_backlog=300000
# enable generic receive offload
ethtool -K eth0 gro on

On Sun, Jul 29, 2012 at 3:17 PM, Simon Hobson <linux@thehobsons.co.uk> wrote:
> Now, from previous comments on this list, I believe the code in Dom0
> that does all this packet handling is single-threaded. In high-traffic
> environments this means it becomes a bottleneck, and people have
> benchmarked some poor results which are blamed on this.
> [snip]
_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
http://lists.xen.org/xen-users