Greetings all,

I would like some advice from people who are or were using Xen 3.4.2 - it should be a rather stable release. Dom0 is CentOS 5.5 64-bit with Xen 3.4.2 installed with the default settings (apart from CPU pinning for Dom0 and a 2 GB memory setup for it). There are 2 built-in NICs (Broadcom) and an add-on network card (Intel) with another 4 NICs. Currently only one NIC is used for all network access, and as far as networking goes, the default settings are used in xend-config.sxp:

(network-script network-bridge)
(vif-script vif-bridge)

The questions are:

How can I improve the network performance (right now all the VMs are sharing one bridge)?

a. Creating multiple bridges and assigning a VM (DomU) per bridge?
b. Trying to hide the NICs from Dom0 using something like "pciback hide"? (Pointers/examples of how one would do this on CentOS 5.5 would be highly appreciated...)

Also, I have noticed that sometimes - somewhat erratically, since I cannot replicate it - VMs seem to time out: while editing in vi, the session stops responding. Nothing in the logs, of course, and no indication in top either. Also, copying 8 GB of data from one disk to another takes 50 (fifty) minutes! Both LVs are attached to the DomU as two independent volumes [xvda - /dev/mapper/VG1-VM1 and xvdb - /dev/mapper/VG2-VM1_home].

Would it be recommended to have all the storage in one block device - one xvda only, which would carry its own LVM structure - as opposed to multiple xvds? Any suggestions on improving the performance in accessing block devices?

I am somewhat baffled, since I have read that Xen is used by ISPs which probably host tens of DomUs on a host machine, while I am struggling to host 7 VMs on a dual quad-core Xeon box with 48 GB RAM and 3 TB of RAID5 15K disk storage!

Please be gentle since I am rather new to Xen.

Thanks,

Frank
<admin@xenhive.com>
2011-Oct-06 19:12 UTC
RE: [Xen-users] XEN - networking and performance
One thing I would suggest is using RAID10 instead of RAID5. RAID5 is frequently a performance bottleneck.
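As a rough illustration of why that matters (the numbers below are assumptions for the sake of the example, not measurements from Frank's array): a small random write on RAID5 typically costs four physical I/Os (read old data, read old parity, write new data, write new parity), while on RAID10 it costs two. On a hypothetical 16-spindle array of 15K disks doing about 175 random IOPS each:

# Back-of-envelope random-write capacity; spindle count and per-disk IOPS are assumed.
SPINDLES=16
IOPS_PER_DISK=175
echo "RAID5  ~ $(( SPINDLES * IOPS_PER_DISK / 4 )) random write IOPS"   # 4-I/O write penalty
echo "RAID10 ~ $(( SPINDLES * IOPS_PER_DISK / 2 )) random write IOPS"   # 2-I/O write penalty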
Yeah. Disks in general are a bottleneck.

One of the traps we've run into when virtualizing moderately I/O-heavy hosts is not sizing our disk arrays right - not in terms of capacity (terabytes), but in spindles. If each physical host normally has 4 dedicated disks, for example, virtualizing 8 of these as domUs attached to a disk array with 16 drives effectively cuts that ratio from 4:1 down to 2:1. Latency goes up, throughput goes down.

I'm finding more and more that sharing CPU resources and memory isn't the problem - it's disk.

Maybe SSD will get really cheap and we can ditch the old mechanical drives. Or at least I can hope.

Jeff
Jeff Sturm wrote:
> If each physical host normally has 4 dedicated disks, for example,
> virtualizing 8 of these as domUs attached to a disk array with 16
> drives effectively cuts that ratio from 4:1 down to 2:1. Latency goes
> up, throughput goes down.

Not only that, but you also guarantee that the I/O is across different areas of the disk (different partitions/logical volumes), and so you also virtually guarantee a lot more seek activity.

-- 
Simon Hobson
Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed author Gladys Hobson. Novels - poetry - short stories - ideal as Christmas stocking fillers. Some available as e-books.
> -----Original Message-----
> From: Simon Hobson
> Sent: Thursday, October 06, 2011 4:51 PM
>
> Not only that, but you also guarantee that the I/O is across different
> areas of the disk (different partitions/logical volumes) and so you
> also virtually guarantee a lot more seek activity.

Very true, yes. In such an environment, sequential disk performance means very little. You need good random I/O throughput, and that's hard to get with mechanical disks beyond a few thousand IOPS. 15K disks help, a larger chassis with more disks helps, but that's just throwing $$$ at the problem and doesn't really break through the IOPS barrier.

Anyone tried SSD with good results? I'm sure capacity requirements can make it cost-prohibitive for many.

Jeff
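One quick, hedged way to check whether a box like Frank's is seek-bound rather than bandwidth-bound is to watch extended device statistics in dom0 while the slow copy runs; the device names below are placeholders for whatever physical disks back the volume groups:

# From dom0 (iostat is in the sysstat package): extended stats every 5 seconds.
# High await/%util with only a few MB/s of throughput usually means the disks
# are spending their time seeking rather than transferring data.
iostat -x sda sdb 5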
Unfortunately, at this point I cannot reconfigure the host's storage. But what would be the course of action taking into consideration the existing storage configuration? Any tips to improve performance? 50 minutes for 8 GB is rather slooooow.

And on my networking question, does anybody have anything to comment? Maybe some successful pciback hide solutions for CentOS 5.5...

Thanks,

Frank
On Fri, Oct 7, 2011 at 11:12 AM, Jeff Sturm <jeff.sturm@eprize.com> wrote:
> Anyone tried SSD with good results? I'm sure capacity requirements can
> make it cost-prohibitive for many.

I'm running 3/4 TB of SSDs for my additional disks in my XCP cloud, shared out as an iSCSI SR. I tried SSDs as storage for disk images under Xen and there were some strange issues, so I'm not quite ready to put OS images of the VMs on it. I'll report back when I have more info.

Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
Jeff Sturm wrote:
> Anyone tried SSD with good results? I'm sure capacity requirements
> can make it cost-prohibitive for many.

Interesting that a product I recall from some years ago doesn't seem to have popped up again - or perhaps it has and I never noticed, since I'm not into high-end storage. This device looked to the host like a standard SCSI disk, but internally it had a load of DRAM, a small (2 1/2"?) disk, a controller, and a small battery. Basically it was a big RAM disk with a SCSI interface, but when the power went off it would write everything to disk. I suspect it probably had a continuous process of writing dirty blocks to disk. Mind you, I suppose RAM does still cost somewhat more than disk.

fpt stl wrote:
>> Also, copying 8 GB of data from one disk to another takes 50
>> (fifty) minutes !!! - both LVMs attached separately to the DomU
>> as two independent volumes [xvda - /dev/mapper/VG1-VM1 and xvdb -
>> /dev/mapper/VG2-VM1_home].
>
> Unfortunately, at this point I cannot reconfigure the host's
> storage. But what would be the course of action taking into
> consideration the existing storage configuration? Any tips to improve
> performance? 50 minutes for 8 GB is rather slooooow.

What is most likely happening here is that while your OS sees the storage as two devices, in fact they are on the same disk (or set of disks). So the copy becomes: read a bit - seek - write a bit - seek - write some metadata - seek - read a bit - seek - write a bit... That's a lot of seeking, and seeks kill performance really badly.

It also depends on what that 8 GB is. A small number of big files stands a half-decent chance of using some write cache to buffer some of the seeks, but if it's lots of small files then there'll be a huge amount of filesystem metadata to be updated as well.

And it also depends on what you are using for the copy. Some programs (such as dd and cpio) allow you to set a blocksize. Increasing this as far as your memory allows will help, as that would mean reading a big chunk of data before seeking elsewhere to write it. Fewer seeks = better performance.

-- 
Simon Hobson
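As a concrete (if simplified) example - the file names here are made up, and this assumes one big file rather than Frank's many small ones - a large block size keeps the heads reading for a good while before they have to seek away to write:

# Copy one large file in 64 MB chunks instead of the default 512-byte blocks:
dd if=/home/frank/backup.tar of=/mnt/vm1_home/backup.tar bs=64M
# For a tree of many small files it can be quicker to pack them into a single
# archive first, copy that one file, and unpack it on the destination.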
Thanks for your reply. The copy is a plain cp and the files are around 100k - 200k each, with some gzip archives as well.

So, if I understand correctly: if a DomU (VM) has several disks/xvds attached (LVs in the Dom0 storage space), then Dom0 or the Xen hypervisor is responsible for moving the data between the DomU disks. Alternatively, if the DomU has just one xvd (a larger LV in the Dom0 storage space), then only the DomU's allocated resources will be used. The second sounds better, in terms of resource allocation at least. Converting several LVs which are attached to one DomU VM into one larger LV might create some performance improvement.

Please correct me if I am wrong.

Frank
> Jeff Sturm wrote:
>
> > Anyone tried SSD with good results? I'm sure capacity requirements
> > can make it cost-prohibitive for many.
>
> Interesting that a product I recall from 'some years ago' doesn't seem
> to have popped up again [...] Basically it was a big RAM disk with a
> SCSI interface, but when the power went off it would write everything
> to disk. [...] Mind you, I suppose RAM does still cost somewhat more
> than disk.

http://www.seagate.com/www/en-au/products/laptops/laptop-hdd/

It's a 500GB 2.5" disk with 4GB of SSD used as a cache. The drive handles the caching internally, so the OS just sees a disk. I have one in my laptop (running Windows) and it seems to speed things up a great deal, although I don't know how much of that is just that it's a 7200 rather than a 5400 RPM disk.

I don't think there are any such things in the 'enterprise grade' product space, though.

James
<admin@xenhive.com>
2011-Oct-08 01:27 UTC
RE: [Xen-users] XEN - networking and performance
Jeff Sturm wrote:
> Anyone tried SSD with good results? I'm sure capacity requirements can
> make it cost-prohibitive for many.

We've used SSD drives as caching drives (L2ARC) in ZFS SAN and NAS solutions. It is a cost-effective way to dramatically improve the performance of the ZFS systems. We usually toss 300GB of SSD drives into the storage systems for caching. SSD is cheap compared to RAM.

Here is a link: http://www.zfsbuild.com/2010/07/30/testing-the-l2arc/
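For anyone who hasn't tried it, adding an L2ARC device to an existing pool is a one-line operation; the pool and device names below are placeholders, not a real configuration:

# Attach an SSD as a cache (L2ARC) device to the pool "tank" - names are examples only.
zpool add tank cache c4t2d0
# Then watch per-vdev activity (including the cache device) while under load:
zpool iostat -v tank 5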
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

fpt stl wrote:
> So, if I understand correctly: if a DomU (VM) has several disks/xvds
> attached (LVs in the Dom0 storage space), then Dom0 or the Xen
> hypervisor is responsible for moving the data between the DomU disks.
>
> Alternatively, if the DomU has just one xvd (a larger LV in the Dom0
> storage space), then only the DomU's allocated resources will be used.

NO - Dom0 does NOT handle transfers for DomU, at least not in the way that you mean.

It does not matter whether you export a whole disk and let DomU partition it, or partition it in Dom0 and export the partitions, or even use file-based volumes in Dom0. In all cases, DomU does the data move/copy. That is, whatever tool you use in DomU will read the source data into memory and write it out to the destination. Even if it does a device-to-device transfer*, the data will still be read into buffers in DomU memory and written out again.

The part Dom0 plays is to "pretend" to be a disk. When a DomU reads a block of data, a thread in Dom0 translates that request into a read request for the appropriate location on the appropriate device, reads it, and passes it to DomU. Similarly, when DomU writes data, Dom0 simply translates the location and writes the data out. For raw disk devices (e.g. whole disks, whole partitions, or whole LVM volumes) the mapping is a simple 1:1 map. Using sparse file storage there's a bit more to it, as the Dom0 filesystem will need to keep track of which parts of the virtual disk file exist and add extents as needed.

So the process to copy a block of data from one volume to another in a DomU is:

The application does a read.
The filesystem/kernel/virtual device drivers etc. translate this into a read request for a virtual block device.
The VBD driver interfaces with its counterpart in Dom0.
The virtual block storage code in Dom0 translates the request into a request for the appropriate device/partition/volume/file and reads the block.
The block is passed up to the VBD in DomU.
The data is then presented up through the kernel/filesystem/etc. code to the application.

Writing a block is pretty much the reverse of the above.

Note that except in special cases, a hypervisor doesn't (and can't) have intimate knowledge of the guest filesystems, and especially the filesystem state. That's not to say it can't be done (Microsoft do things like that in their latest systems) - but it cannot be done in the general case, as it needs very intimate knowledge of and cooperation between hypervisor and guest. Thus a copy operation always involves the data being read into memory in the guest and then written out again.

* I believe there is a function where a program can ask the kernel to copy data directly from source to destination without using buffers in the program. This is still a read-into-memory-then-write-to-device operation.

James Harper wrote:
> http://www.seagate.com/www/en-au/products/laptops/laptop-hdd/
>
> It's a 500GB 2.5" disk with 4GB of SSD used as a cache. The drive
> handles the caching internally so the OS just sees a disk. I have one
> in my laptop (running Windows) and it seems to speed things up a great
> deal, although I don't know how much of that is just that it's a 7200
> rather than a 5400 RPM disk.

It's still a disk with some cache in front - and so still subject to seek delays if your working set is larger than the cache, and the cache is still SSD, which is slower (especially on writes) than dynamic RAM.

The product I recall was a true RAM disk - with effectively zero seek times regardless of working set size. The one I recall was also a) not that large, and b) eye-wateringly expensive, though. I suspect such things exist for those that need the performance and will pay for them.

-- 
Simon Hobson
Hello,

On Sat, Oct 8, 2011 at 2:46 AM, Simon Hobson <linux@thehobsons.co.uk> wrote:
> The product I recall was a true RAM disk - with effectively zero seek
> times regardless of working set size.
>
> The one I recall was also a) not that large, and b) eye-wateringly
> expensive, though. I suspect such things exist for those that need the
> performance and will pay for them.

Sounds like Fusion-io, perhaps? By far the fastest, but the most expensive per GB that I've heard of, anyhow.

"IBM's project Quicksilver, based on Fusion-io technology, showed that solid-state technology in 2008 could deliver the fastest performance of its time: 1 million IOPS."

Mark
D. Duckworth
2011-Oct-08 19:07 UTC
[Xen-devel] Re: [Xen-users] XEN - networking and performance
Salutations,

From the xen-users list:
> Currently only one NIC is used for all network access, and as far as
> networking, the default settings are used - xend-config.sxp:
> (network-script network-bridge)
> (vif-script vif-bridge)
>
> How can I improve the network performance (now all the VMs are sharing
> one bridge):
>
> a. creating multiple bridges and assigning a VM (DomU) per bridge
> b. trying to hide the NICs from Dom0 using something like "pciback
> hide"

Xen networking has been a thorn in my eye and a similar question has been with me for a long time now. So prepare, for this response contains rage.

Xen networking has room for many different approaches, yet the best thing about its scripts is that they are not mandatory to use. You can fully adjust the scripts to your needs or even replace them with your own. You can find modifications on forums and blogs, although most of them just seem to be copies of a few suggestions made by a few people. Logical, because quite frankly it's a pain to grope what the Xen scripts and udev rules really do, let alone grok most of what they do out of the box.

Right now I just care about creating my ideal networking solution, i.e. routing, bridging and firewall stuff for VMs with different roles. I am running Xen 4.1.2-rc3-pre non-professionally on a quad-core, single-CPU 1U server with 4 hard drives in a RAID10 configuration. The server has one usable Ethernet port with multiple globally routable IPs. I can't use the other Ethernet port; the server has no IPMI and the ISP declines the use of two ports by one system because the data center is a no-smoking zone for both humans and routers and switches.

So the highest priority is to reach dom0 from the Internet, and therefore my grub has fallback options, one of which is a boot to Linux with no Xen. In turn this means that dom0's networking boot scripts may not depend on Xen at all, and Xen may not change networking in any way unless specified. My dom0 is a minimal system that only controls VMs and networking. I want dom0 to be small and simple, so the obvious choice is Arch Linux. Dom0 should be separated from the domUs in that the domUs cannot reach dom0, and one domU (domN) should do all networking for the other domUs.

I tried to use xl with Xen 4 for a while but due to bugs and missing features I had to go back to xm and xend. This is where the fun begins. In the past I used xend with network-bridge, and for some strange reason (voodoo probably) I blindly accepted that script and blamed myself for not appreciating it. But let's be blunt and honest: the scripts, in particular the script that *modifies dom0 networking during xend startup*, is the biggest piece of sh!# idea I have ever seen in Xen. It creates bridges, takes eth0 down, tortures dom0 with occult ip addr/route, brctl, sysctl and iptables awk/sed manipulations, and then it has you looking at your screen yearning for the moment that ping timeouts become ping replies, telling you that your box is reachable again. This script is a malevolent demon from the sewers of Norman the Golgothan, and the worst part is that network-bridge is also still the recommended default!

On the more positive side there was a fantastic update in Xen 4 where network-bridge changed a bit so that "it will not bring a bridge up if one already exists". Whoever wrote it should get a corporate medal for that and then a long vacation to a deserted island with an MSX II and no floppies. How can this even be approved by Xen's senior project manager, or is that a vacant position?

It surely seems so. Xen's /etc/xen/scripts (another design fail, why not /usr?) and udev scripts are confusing ad-hoc bloatware routines and are not transparent at all. With the current Xen 4 I saw the premature advice to more or less 'prepare for migration from xm to xl'. Yet xl supports less and is conflicting: there is no vifname, no 'xl new/delete', no more Python, no relocation, and suddenly there is a conflict between 'xm start domain' versus 'xl start /etc/xen/domain'.

So new features emerge, adding to the confusion of the end user, while old problems are not being fixed properly. I wonder why, especially because it does not seem that xm and xend are the broken parts that need to be replaced by an unstable interface.

What needs attention first and foremost are two things, the first of which is real and wise effort into one simple, minimal script that just handles the minimum in a transparent way, e.g. control the hypervisor, manage VMs, manage the backend. Of course networking can be done on domain start too, but this has to happen in an entirely different way from what it does now and how it does it. This is so important because it gives more control to the user that runs Xen. It's also a good moment to build in proper and mature support for IPv6.

Secondly, the website and documentation should be cleaned up and revised where appropriate. The current situation is a mess that has a much too steep and incompatible learning curve right now - for example, a bridge should just not be named eth0 and a physical device should not be renamed at all. It's fundamentally wrong, stupid, mad as hell and a PR failure for Xen to do it this way out of the box, no matter how often and detailed it has been documented on the website.

I propose something like the following for Xen networking:

* Xen will not manipulate non-xen devices or a firewall under any circumstance; it might only add or subtract routes and/or rules from the routing tables.
* Allow for networking configuration per domU. For example, let networking per device be nat, routed, bridged or custom, where all of them name the interface and bring it up; nat only adds the IP to the routing table; routed could be an array of routes and rules that need to be added to or subtracted from various routing tables, and it might support proxy ARP; bridged turns off ARP, sets the MAC on the vif and then adds the interface to a bridge that should already have been created by the user; and custom is a custom set of unmanaged commands run after creating and destroying a domain.

I am aware that this can already be done with Xen. However, that process is quite arbitrary and it does things no one asked for. So one has to read the scripts - for example the iptables part of vif-bridge. It is not handled transparently, it is quite arbitrary, and it automatically executes for all VMs that are being started. This leads you to wonder what more it does without you knowing it...

So, with that off my chest and the second line of my network-bridge being the words "exit 0", Xen leaves my dom0 configuration alone like it is supposed to do. While KVM is becoming the 'next cool thing' for many people, I would still prefer a separate hypervisor, so now the fat just has to be removed from Xen.
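One small data point on the per-domU idea (this is a sketch of existing xm/xend syntax as I understand it; the bridge names, MACs and IP are invented): the vif line in a domU config can already select the hotplug script, bridge and addressing per interface, which covers part of what is proposed above:

# In a domU config file - choose script, bridge and addressing per vif:
vif = [ 'mac=00:16:3e:aa:bb:01, bridge=br-dmz, script=vif-bridge',
        'mac=00:16:3e:aa:bb:02, ip=192.0.2.10, script=vif-route' ]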
James Harper
2011-Oct-08 23:01 UTC
[Xen-devel] RE: [Xen-users] XEN - networking and performance
> the scripts, in particular the script that *modifies dom0 networking
> during xend startup*, is the biggest piece of sh!# idea I have ever
> seen in Xen. It creates bridges, takes eth0 down, tortures dom0 with
> occult ip addr/route, brctl, sysctl and iptables awk/sed manipulations,
> and then it has you looking at your screen yearning for the moment that
> ping timeouts become ping replies, telling you that your box is
> reachable again. This script is a malevolent demon from the sewers of
> Norman the Golgothan, and the worst part is that network-bridge is also
> still the recommended default!

That script was designed to make the network look like it did before you installed Xen. Anyone with anything beyond a basic single-port, single-VLAN setup comments that line out and creates their own bridges.

> It surely seems so. Xen's /etc/xen/scripts (another design fail, why
> not /usr?) and udev scripts are confusing ad-hoc bloatware routines and
> are not transparent at all. With the current Xen 4 I saw the premature
> advice to more or less 'prepare for migration from xm to xl'. Yet xl
> supports less and is conflicting: there is no vifname, no 'xl
> new/delete', no more Python, no relocation, and suddenly there is a
> conflict between 'xm start domain' versus 'xl start /etc/xen/domain'.

This is an open source project. Please feel free to submit your own scripts. I'd definitely like to see something that didn't create firewall rules, as I don't even want iptables loaded on my Xen systems.

James
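In practical terms, "commenting that line out" is an edit to xend-config.sxp; a sketch only, since exact contents vary between distributions and setups:

# /etc/xen/xend-config.sxp - stop xend from rearranging dom0 networking at startup.
# Comment out the default:
#(network-script network-bridge)
# Keep the vif hotplug script so guest vifs are still attached to a bridge:
(vif-script vif-bridge)
# The bridge itself is then created by the distro's own network configuration,
# as in the examples later in this thread.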
D. Duckworth wrote:

A vitriolic rant!

> Right now I just care about creating my ideal networking solution, i.e.
> routing, bridging and firewall stuff for VMs with different roles.
> ...
> ... and Xen may not change networking in any way unless specified.

All that is trivial to do. The network-script is (I believe) deprecated anyway, as the developers realise it's not very good AND the OS-native tools for things like managing bridges have improved somewhat. It probably made sense when the scripts were first written, since making bridges and/or flexible setups that can survive booting in or out of Xen required more script voodoo than most users could muster. It's one thing to say these scripts are rubbish, but you have to realise the historical context from when they were written.

So comment out any network-script in your Xen config. You are now no longer using the Xen-supplied scripts for setting up your host networking.

In your host config, get it to create the bridge - this is trivially easy in Debian these days, and multiple posts have been made here recently. This is an extract from my own system at home:

auto eth0
iface eth0 inet static
    bridge_ports peth0
    address 192.168.nn.nn
    netmask 255.255.255.0
    gateway 192.168.nn.nn

You see, that really is all it takes to configure a bridge in Debian these days!

My preference is to have udev name my physical interfaces as things like pethint, pethext, and so on. This is one simple edit in something like /etc/udev/rules.d/<something>persistent-net-rules, where you simply change "eth<n>" for the interface to something else. You don't have to do this, but IMO it makes things much easier as you don't have to keep remembering whether eth0 is the outside, inside, or some other network!

These two changes mean you have a network in Dom0 that works the same whether booted natively or with Xen, where Dom0 uses one (or more) bridge(s) for its own networking, and the physical interface(s) are connected to the bridge(s) you want.

Now, if you want a DomU to act as a router for the rest of the network, that's easy too - I do that at home. There are two ways of doing it:

1) You can use PCI passthrough to hide a NIC from the host and make it available natively to the guest. Then just configure the guest to do whatever you want with the traffic.

2) You can create another bridge but not configure an IP address on it in Dom0. Connect the guest to this bridge as well as the other internal networks, and it can route traffic in the same way. This is logically the same as option 1, but with a (software) switch installed between the guest and the outside world.

-- 
Simon Hobson
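The udev rename mentioned above is a one-line rule per NIC. A sketch, assuming the usual persistent-net rules file and an invented MAC address:

# /etc/udev/rules.d/70-persistent-net.rules (file name and MAC are examples only)
# Give the NIC with this MAC the name "pethext" instead of "eth0":
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1e:8c:aa:bb:cc", ATTR{type}=="1", NAME="pethext"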
Konrad Rzeszutek Wilk
2011-Oct-10 17:03 UTC
Re: [Xen-devel] Re: [Xen-users] XEN - networking and performance
On Sat, Oct 08, 2011 at 07:07:01PM +0000, D. Duckworth wrote:> Salutations, > > From the xen-users list: > > I would like some advice from people how are/were using Xen 3.4.2 - > > it should be a rather stable release. Dom0 is CentOS 5.5 64bit with > > Xen 3.4.2 installed with the default settings (minus cpu pinning to > > Dom0 and memory setup for 2GB). There are 2 built-in nics (Broadcom) > > and an add-on network card (Intel) with another 4 nics. Currently > > only one NIC is used for all network access, and as far as > > networking, the default setings are used - xend-config.sxp: > > (network-script network-bridge) (vif-script vif-bridge) > > > > The questions are: > > > > How can I improve the network performance (now all the VM are sharing > > one bridge): > > > > a. creating multiple bridges and assigning a VM (DomU) per bridge > > b. trying to hide the NICs from Dom0 using something like "pciback > > hide" - (pointers/example of how one would do this in Centos 5.5 > > would be highly appreciated...) > > Xen Networking has been a thorn in my eye and a similar question has > been with me for a long time now. So prepare, for this response contains > rage. > > Xen networking has room for many different approaches, yet the best > thing about its scripts is that they are not mandatory to use. You can > fully adjust the scripts to your needs or even replace them with your > own. You can find modifications on forums and blogs although most of > them just seem to be copies of few suggestions made by few people. > Logical, because quite frankly it''s a pain to grope what the Xen > scripts and udev rules really do, let alone grok most of what they > do out of the box. > > Right now I just care about creating my ideal networking solution, i.e. > routing, bridging and firewall stuff for vms with different roles. I am > running Xen 4.1.2-rc3-pre non-professionally on a quad core single cpu > 1U server. with 4 hard drives in RAID10 configuration. The server has > one usable Ethernet port with multiple globally routable IPs. I can''t > use the other ethernet port; the server has no IPMI and the ISP declines > use of two ports by one system because the data center is a no smoking > zone for both humans and routers and switches. > > So the highest priority is to reach dom0 from the Internet and > therefore my grub has fallback options, one of which is a boot to Linux > with no Xen. In turn this means that dom0''s networking boot scripts > may not depend on Xen at all, and Xen may not change networking in any > way unless specified. My dom0 is a minimal system that only controls vms > and networking. I want dom0 to be small and simple so the obvious > choice is Arch Linux. Dom0 should be separated from the domUs in that > the domUs cannot reach dom0 and one domU (domN) should do all > networking for the other domUs. > > I tried to use xl with xen4 for a while but due to bugs and missing > features I had to go back to xm and xend. This is where the fun > begins. In the past I used xend with network-bridge and for some strange > reason (voodoo probably) I blindly accepted that script in the past and > blamed myself for not appreciating it. But let''s be blunt and honest: > the scripts, in particular the script that *modifies dom0 networking > during xend startup* is the biggest piece of sh!# idea I have ever seen > in Xen. 
It creates bridges, takes eth0 down, tortures dom0 with occult > ip addr/route, brctl, sysctl and iptables awk/sed manipulations and then > it has you looking at your screen yearning for the moment that ping > timeouts become ping replies, telling you that your box is reachable > again. This script is a malevolent demon from the sewers of Norman the > Golgothan and the worst part is that network-bridge is also still the<laughs>> recommended default!Can you point me to where it mentions that please?> > On the more positive side there was a fantastic update in Xen 4 where > network-bridge changed a bit so that "it will not bring a bridge up if > one already exists". Whoever wrote it should get a corporate medal for > that and then a long vacation to a deserted island with an MSX II and > no floppies. How can this even be approved by Xen''s senior project > manager, or is that a vacant position?We realized that the networking setup is quite complex and would be best left in the hands of the admins. The problem is that..> > It surely seems so. Xen''s /etc/xen/scripts (another design fail, why > not /usr?) and udev scripts are confusing ad-hoc bloatware routines and > are not transparent at all. With the current xen4 I saw the premature > advice to more or less ''prepare for migration from xm to xl''. Yet, xl > supports less and is conflicting: there is no vifname, no ''xl > new/delete'', no more python, no relocation and suddenly there is a > conflict between ''xm start domain'' versus ''xl start /etc/xen/domain''. > > So new features emerge, adding to the confusion of the end user, while > old problems are not being fixed properly. I wonder why, especially > because it does not seem that xm and xend are the broken parts that > need to be replaced by an unstable interface. > > What needs attention first and foremost are two things, first of which > is real and wise effort into one simple, minimal script that just > handles the minimum in a transparent way e.g. control the hypervisor, > manage vms, manage the backend. Of course networking can be done on > domain start too, but this has happen in an entirely different way from > what it does and how it does it. This is so important because it gives > more control to the user that runs Xen. It''s also a good moment to > build in proper and mature support for IPv6. > > Secondly, the website and documentation should be cleaned up and > revised where appropriate. The current situation is a mess that has > a much too steep and incompatible learning curve right now - for > example, a bridge should just not be named eth0 and a physical device > should not be renamed at all. It''s fundamentally wrong, stupid, mad as > hell and a PR failure for Xen to do it this way out of the box. No > matter how often and detailed it has been documented on the website... the documentation and setup is sometimes quite hard. BTW, we are going to do on Oct 26th a Documentation Day to clean up some of this mess. Would you be intereted in helping along - perhaps in the networking Wiki?> > I propose something like the following for xen networking: > > * Xen will not manipulate non-xen devices or a firewall under any > circumstance, it might only add or substract routes and/or rules from > the routing tables,Uh, what is ''non-xen'' devices? Like bridges?> * Allow for networking configuration per domU. 
For example let > networking per device be nat, routed, bridged or custom, where > all name the interface and bring it up; nat only adds the ip to the > routing table; routed could be an array of routes and rules that need > to be added or subtracted from various routing tables and it might > support proxyarp; bridged turns off arp, sets the mac on the vif and > then adds the interface to a bridge that should already be created by > the user; and custom is a custom set of unmanaged commands after > creating and destroying a domain. You lost me. <sigh> I am using a bridge configuration and just do:

auto lo
iface lo inet loopback

auto switch
iface switch inet static
    address 192.168.101.16
    netmask 255.255.255.0
    gateway 192.168.101.1
    bridge_ports eth2

And just use that ''bridge=switch'' in all my configuration. And that seems to work just fine - wouldn''t that be the best way of providing the first network setup to users? I would think the majority of folks do something akin to this? > > I am aware that this can already be done with Xen. However, that > process is quite arbitrary and it does things no one asked for. So one > has to read the scripts. For example with the iptables part of > vif-bridge. It is not handled transparently, it is quite arbitrary and > it automatically executes for all vms that are being started. This leads > you to wonder what more it does without you knowing it... > > So, with that off my chest and the second line of my network-bridge > being the words "exit 0" Xen lets my dom0 configuration alone > like it is supposed to do. While KVM is becoming a ''next cool > thing'' for many people I would still prefer a separate hypervisor so now > the fat just has to be removed from Xen. > I am all for removing fat. Do you have links to some of the particularly bad Wiki pages that should be heavily audited? _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
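For reference, the per-domU half of a bridge setup like the one above is just the vif line in the guest config file. A minimal sketch, assuming the bridge named switch was already created by the distribution rather than by Xen; the file name, MAC address and vifname below are illustrative placeholders, not values taken from this thread:

# /etc/xen/vm1 (fragment)
vif = [ 'mac=00:16:3e:aa:bb:cc, bridge=switch, vifname=vm1eth0' ]

The vifname= option is honoured by xm/xend; as noted above, plain xl does not support it, so treat that part as optional.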
On 7 October 2011 19:17, fpt stl <fptstl@gmail.com> wrote:
> And on my networking question, does anybody have anything to comment? Maybe
> some successful pciback hide solutions for Centos 5.5...

On Centos5.x the pciback driver is a module rather than built into the kernel, therefore you can''t use pciback.hide on the kernel command line. However you can manually bind the devices to pciback after the dom0 is booted, then pass them to the domU, e.g.

modprobe pciback passthrough=1

SLOTS=(0000:09:00.0 0000:09:01.0 0000:09:03.0)
for i in ${SLOTS[@]}; do
    echo -n $i > /sys/bus/pci/drivers/pciback/new_slot
    echo -n $i > /sys/bus/pci/drivers/pciback/bind
done

xm pci-list-assignable-devices

and then in your domU.cfg

pci = [ '09:00.0', '09:01.0', '09:03.0' ]

(I might be slightly mixing my Centos5.x and Fedora16 syntax above, poke me if you can''t get it working)

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
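One step worth spelling out alongside the recipe above: if dom0 already loaded a normal driver for those NICs at boot, each device usually has to be released from that driver before the bind to pciback will succeed. A minimal sketch using the standard sysfs unbind interface; the slot numbers are the same placeholders as above, and whether this step is needed at all depends on what claimed the device at boot:

SLOTS=(0000:09:00.0 0000:09:01.0 0000:09:03.0)
for i in ${SLOTS[@]}; do
    # release the device from whatever driver currently owns it, if any
    if [ -e /sys/bus/pci/devices/$i/driver ]; then
        echo -n $i > /sys/bus/pci/devices/$i/driver/unbind
    fi
done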
Andy Burns
2011-Oct-10 20:21 UTC
Re: [Xen-devel] Re: [Xen-users] XEN - networking and performance
On 10 October 2011 18:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> You lost me. <sigh> I am using a bridge configuration and just
> do:
>
> auto lo
> iface lo inet loopback
>
> auto switch
> iface switch inet static
>     address 192.168.101.16
>     netmask 255.255.255.0
>     gateway 192.168.101.1
>     bridge_ports eth2
>
> And just use that ''bridge=switch'' in all my configuration. And that
> seems to work just fine - wouldn''t that be the best way of providing
> the first network setup to users? I would think the majority of folks
> do something akin to this?

Yep, I keep mine simple too, in Redhat/Centos/Fedora terms my networking is

# cat /etc/sysconfig/network-scripts/ifcfg-virbr0
DEVICE=virbr0
TYPE=Bridge
ONBOOT=yes
USERCTL=no
BOOTPROTO=none
IPADDR0=192.168.1.125
PREFIX0=24
GATEWAY0=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=YES
IPV6INIT=no
NM_CONTROLLED=no

# cat /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE="em1"
BOOTPROTO=none
ONBOOT="yes"
NM_CONTROLLED=no
HWADDR=00:1E:8C:BC:53:36
TYPE=Ethernet
BRIDGE=virbr0
NAME="System em1"
UUID=1dad842d-1912-ef5a-a43a-bc238fb267e7

# cat /etc/rc.d/rc.local
ifup virbr0
ifup em1
ip route add default via 192.168.1.1

then in each domU''s .cfg file

vif = [ 'mac=00:16:36:xx:yy:zz, bridge=virbr0' ]

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
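A quick way to sanity-check a hand-built bridge like this before suspecting Xen itself; the commands below are ordinary bridge-utils/iproute2 checks, suggested here only as a convenience and nothing Xen-specific:

brctl show virbr0     # em1 should be listed as a port; each running guest adds a vifX.Y
ip addr show virbr0   # the dom0 IP should sit on the bridge, not on em1
ip route              # the default route should leave via virbr0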
I''ve removed xen-users, the subject seems appropriate for xen-devel now. > > I tried to use xl with xen4 for a while but due to bugs and missing > > features I had to go back to xm and xend. This is where the fun > > begins. In the past I used xend with network-bridge and for some > > strange reason (voodoo probably) I blindly accepted that script in > > the past and blamed myself for not appreciating it. But let''s be > > blunt and honest: the scripts, in particular the script that > > *modifies dom0 networking during xend startup* is the biggest piece > > of sh!# idea I have ever seen in Xen. It creates bridges, takes > > eth0 down, tortures dom0 with occult ip addr/route, brctl, sysctl > > and iptables awk/sed manipulations and then it has you looking at > > your screen yearning for the moment that ping timeouts become ping > > replies, telling you that your box is reachable again. This script > > is a malevolent demon from the sewers of Norman the Golgothan and > > the worst part is that network-bridge is also still the > > <laughs> > > recommended default! > > Can you point me to where it mentions that please? Sorry for the word flood & glad you had to laugh at my cynicism, I had wanted to use my Norman comparison at least once somewhere and this was the perfect opportunity. I''ll explain the subtlety of the above problem. So in late 2008 I installed xen, then went through xend-config.sxp a couple of times and came across bridges as the documented feature and then two non-descriptive oneliners of alternatives. No need for NAT in my situation but I skipped routing due to a lack of understanding on my side. Heh what can I say, "know thine limitations?" I did search for info about the routing option and it did not make sense to me at the time. I think there was no such thing on the wiki back then either. So I shelved the idea for later consideration and a few months later there was a wiki update that did vaguely mention routing. But I already had a working bridge setup, a tentative one due to the mysterious voodoo happening somewhere on the machine. I probably have made this point enough now but if it happens to me it probably happens to others too. I had no clue why arp was being disabled, why the mac got fe:ff:ff:ff:ff:ff. I do now thanks to other puzzled people who emailed their questions and got them answered. > We realized that the networking setup is quite complex and would be > best left in the hands of the admins. The problem is that.. > > > > It surely seems so. Xen''s /etc/xen/scripts (another design fail, why > > not /usr?) and udev scripts are confusing ad-hoc bloatware routines > > and are not transparent at all. With the current xen4 I saw the > > premature advice to more or less ''prepare for migration from xm to > > xl''. Yet, xl supports less and is conflicting: there is no vifname, > > no ''xl new/delete'', no more python, no relocation and suddenly there > > is a conflict between ''xm start domain'' versus ''xl start > > /etc/xen/domain''. > > > > So new features emerge, adding to the confusion of the end user, > > while old problems are not being fixed properly. I wonder why, > > especially because it does not seem that xm and xend are the broken > > parts that need to be replaced by an unstable interface. > > > > What needs attention first and foremost are two things, first of > > which is real and wise effort into one simple, minimal script that > > just handles the minimum in a transparent way e.g. control the > > hypervisor, manage vms, manage the backend. 
Of course networking can > > be done on domain start too, but this has to happen in an entirely > > different way from what it does and how it does it. This is so > > important because it gives more control to the user that runs Xen. > > It''s also a good moment to build in proper and mature support for > > IPv6. > > > > Secondly, the website and documentation should be cleaned up and > > revised where appropriate. The current situation is a mess that has > > a much too steep and incompatible learning curve right now - for > > example, a bridge should just not be named eth0 and a physical > > device should not be renamed at all. It''s fundamentally wrong, > > stupid, mad as hell and a PR failure for Xen to do it this way out > > of the box. No matter how often and detailed it has been documented > > on the website. > > .. the documentation and setup is sometimes quite hard. BTW, we are > going to do on Oct 26th a Documentation Day to clean up some of this > mess. Would you be interested in helping along - perhaps in the > networking Wiki? Interested yes, perhaps. Could you please send me the details? As for the notion that documentation and setup can be hard, that''s exactly why I, like you, suggest to move away from OS configuration as much as possible -- obviously the hypervisor and kernel integration are the primary domain. Daily commits by committed people seem to support this theory. Moving away from OS integration narrows the scope of the Xen project which means less specific documentation needs to be maintained while making it easier for everyone to invoke their own methods. OS integration, even udev scripts, should move to the distribution''s developers and they can leave the rest up to the end user. Supporting everything from within Xen is like trying to be everybody''s friend. An impossible task, just ask Gandhi, he tried it and got shot. So take vif-nat. It does stuff with dhcp with all kinds of assumptions. I don''t even use ISC''s dhcpd, I use dnsmasq. And this script does stuff with iptables. I use shorewall and shrieked when I noticed a sudden change in my tables. A fast but not fun two or three hours later I found what had done it... after having searched through /usr and the domU. So I found this whole script an abomination. Here''s one other example, from the function dhcp_arg_add_entry (!):

# handle Red Hat, SUSE, and Debian styles, with or without quotes

Just...don''t do this. These might be three big distributions but I wonder whether they outnumber all users of all other distributions. And the BSD camp doesn''t seem to be really fond of Xen at the moment either. In fact to me it seems like Xen''s arrows all focus on Linux while a bigger arrow marked KVM points in the same direction. But I''m not deeply involved with any project''s development and never have been, so I could be totally wrong. I don''t even know Xen''s project roadmap, future outlook et cetera. I''ll even admit I''m just a boob! Still, it seems only logical to move to a more generic way of doing things while OS/distribution devs and admins take care of their respective domains. (Is this even remotely making sense to anyone?) > > > > I propose something like the following for xen networking: > > > > * Xen will not manipulate non-xen devices or a firewall under any > > circumstance, it might only add or subtract routes and/or rules > > from the routing tables, > > Uh, what is ''non-xen'' devices? Like bridges? Yes. If I would use a Xen created bridge it has STP disabled, more occult features; it''s evil. 
So Xen must not create or touch those and also not touch the routing/iptables with automated scripts. It is clearly a much better idea to create transparent per-domain networking options. It can be done in different ways, like with pre/post-up/down networking scripts that the dev/admin has to write. It still allows for Xen to supply examples and suggest sane defaults. This time transparently, making it all easier to maintain for everyone because it is more modular. There can be a syntax check that spawns a warning if a vif is configured without configured settings/scripts etc... > > > * Allow for networking configuration per domU. For example let > > networking per device be nat, routed, bridged or custom, where > > all name the interface and bring it up; nat only adds the ip to > > the routing table; routed could be an array of routes and rules > > that need to be added or subtracted from various routing tables and > > it might support proxyarp; bridged turns off arp, sets the mac on > > the vif and then adds the interface to a bridge that should already > > be created by the user; and custom is a custom set of unmanaged > > commands after creating and destroying a domain. > > You lost me. <sigh> I am using a bridge configuration and just > do: > > auto lo > iface lo inet loopback > > auto switch > iface switch inet static > address 192.168.101.16 > netmask 255.255.255.0 > gateway 192.168.101.1 > bridge_ports eth2 > > And just use that ''bridge=switch'' in all my configuration. And that > seems to work just fine - wouldn''t that be the best way of providing > the first network setup to users? I would think the majority of folks > do something akin to this? Well I said a lot of things but not that it''s broken. :) Not that I couldn''t, though. The above stuff works in Debian, but Arch Linux has no ifup, neither does bsd. So those maintainers have to port Xen''s code with patches making it time-expensive (and painfully dull). Instead the whole routine that removes interfaces, adds new bridges, sets iptables, etc. should simply be deleted. Not Xen''s business; makes grown men cry. How the majority of users handles it is not really relevant imho because they are also bound to what Xen supplies. A lot of good people see no obvious choice and a lot of good people have no clue about STP. Consequently you''ll find a lot of the same help requests on various forums and lists when searching for clues. I tried to make sense of the minimum that the xen scripts do (with xenstore, vif creation, device and udev interaction etc.) but the jungle of 118 functions (grep ''()'' /etc/xen/scripts/* | wc -l, take or leave a few) was not helping. I''d appreciate it if someone would write me a detailed step-by-step hierarchy that explains what happens right in a Xen4.2 dom0 from when a domain is created until it''s running, what might happen in between, and what happens from a domain''s shutdown or crash to domain deletion. > > So, with that off my chest and the second line of my network-bridge > > being the words "exit 0" Xen lets my dom0 configuration alone > > like it is supposed to do. While KVM is becoming a ''next cool > > thing'' for many people I would still prefer a separate hypervisor > > so now the fat just has to be removed from Xen. > > > > I am all for removing fat. Do you have links to some of the particularly > bad Wiki pages that should be heavily audited? I would be more entertained to see how far I can take this plan and if/how discussion will shape it into a workable form. 
I think that the entire networking section and more can be wholly rewritten together with new scripts. Mind that I am only talking about core Xen and not about XCP/libvirt/foo/bar, in fact I don''t even have experience with any of that and I simply assume that they handle stuff in their own way. One question that has been bugging me is the xm/xl thing. What are the exact plans, will xl indeed be phased in while xm will be phased out and what will/won''t be supported by xl? Or did I misinterpret it? Imho a suggestion to switch over should only come out when xl becomes ''distribution-grade stable''. Right now it''s just a tool worth testing. (-That''s what "she" said..) _donduq. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
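To make the ''custom set of unmanaged commands'' idea from the proposal above concrete, here is a rough sketch of what such an admin-owned hook could look like. Everything in it is hypothetical - the file name, the wrapper that would call it and the bridge name are all chosen by the admin, not an existing Xen interface:

#!/bin/bash
# vm1-net-up.sh -- hypothetical per-domain hook, run by the admin's own
# wrapper right after "xm create vm1", with the new vif name as $1.
vif="$1"                          # e.g. vif3.0

ip link set "$vif" up
brctl addif switch "$vif"         # bridge "switch" was created by the OS, not by Xen

# routed variant instead of the bridge line:
#   ip route add 192.0.2.10/32 dev "$vif"
#   echo 1 > /proc/sys/net/ipv4/conf/"$vif"/proxy_arp

The point of the sketch is only the division of labour being argued for here: Xen creates the vif and gets out of the way, and everything else lives in a ten-line script the admin wrote and can read.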
On Wed, 2011-10-12 at 05:50 +0100, D. Duckworth wrote: > > auto lo > > iface lo inet loopback > > > > auto switch > > iface switch inet static > > address 192.168.101.16 > > netmask 255.255.255.0 > > gateway 192.168.101.1 > > bridge_ports eth2 > > > > And just use that ''bridge=switch'' in all my configuration. And that > > seems to work just fine - wouldn''t that be the best way of providing > > the first network setup to users? I would think the majority of > folks > > do something akin to this? > > Well I said a lot of things but not that it''s broken. :) Not that I > couldn''t, though. The above stuff works in Debian, but Arch Linux has > no ifup, neither does bsd. If you know how to set up a bridge on those systems please could you update http://wiki.xen.org/xenwiki/HostConfiguration/Networking to add links to the relevant Arch / BSD documentation. A couple of simple examples for the most common case would also be useful. > So those maintainers have to port Xen''s code with patches making it > time expensive (and painfully dull). Instead the whole routine that > removes interfaces, adds new bridges, sets iptables, etc. should > simply be deleted. We agree and this is why with the xl toolstack we do not support the use of these scripts or ever call out to them automatically (they haven''t actually been deleted, since xend still uses them). With xl we recommend that folks use their distribution-provided mechanisms to set up the host network configuration. We made some attempt to mitigate the network-* insanity for xend users, by making it such that the script will only do the mad things if it is (heuristically) detected that the admin hasn''t already set something up themselves; it''s not clear how effective this strategy was in practice - even better is to just comment out the relevant line in your xend-config.sxp IMHO. Hopefully it is explained below why we aren''t doing any wholesale reworking of how xend does things. None of that really addresses the complexity of the existing vif-* scripts. I suspect that deprecating the network-* ones for xl has effectively deprecated all but the vif-bridge one since e.g. vif-nat depends quite heavily on the setup which network-nat has done. There is probably scope for providing a much simpler vif-bridge script for use with xl; the existing one does have some odd stuff in it. For more complex scenarios like nat etc there is certainly value in having examples of how people have done stuff and, as James suggested, we''d certainly like to see people posting (or writing up on the wiki) their own configurations and scripts etc. > One question that has been bugging me is the xm/xl thing. What are the > exact plans, will xl indeed be phased in while xm will be phased out > and what will/won''t be supported by xl? Or did I misinterpret it? Imho > a suggestion to switch over should only come out when xl becomes > ''distribution-grade stable''. xend has been effectively unmaintained for several releases now and there is nobody who is willing to step up and support it. Unless someone steps up as a maintainer it will become more deprecated with time and in a few releases I expect it will be removed from the tree. On the other hand we are actively developing xl and supporting it. In 4.1 we recommended that people try xl and report the bugs and missing features which would prevent them from transitioning from xend. We are doing our best to address these bugs and short-comings as they are reported to us. 
It is our hope that we can fully recommend xl in the 4.2 or 4.3 time scale. Obviously if people delay trying xl until we''ve switched to it then there is something of a chicken and egg problem wrt making sure it supports their needs. It seems that you have several issues which are impacting you but I''m having trouble digesting them all out of your mails. Posting individually about any of the specific issues which you have that are preventing you from using xl (or indeed Xen generally) will ensure a much greater chance that someone will notice and do something about them. I''m sorry to say that posting long rants is unlikely to have the same effect... Thanks, Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
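For xend users who want the ''comment it out'' approach mentioned above, the change is small; a sketch of the relevant lines in /etc/xen/xend-config.sxp, assuming the bridge itself is built by the distribution as in the earlier examples in this thread:

# (network-script network-bridge)
# ^ left commented out so xend no longer rewires dom0 networking at start-up
(vif-script vif-bridge)
# vif-bridge still plugs each new vif into whatever bridge the guest config
# names with bridge=...

With that, dom0 keeps whatever bridge the distribution configured, and only the per-vif hotplug step is still done by the Xen scripts.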
Konrad Rzeszutek Wilk
2011-Oct-13 18:04 UTC
Re: [Xen-devel] XEN - networking and performance
> > .. the documentation and setup is sometimes quite hard. BTW, we are > > going to do on Oct 26th a Documentation Day to clean up some of this > > mess. Would you be interested in helping along - perhaps in the > > networking Wiki? > > Interested yes, perhaps. Could you please send me the details? As for

http://lists.xensource.com/archives/html/xen-users/2011-09/msg00494.html and #xendocday on Oct 26th.

_______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel