Hi All,

Just wondering if Xen/XCP aligns local storage to 4K boundaries when using 4K-formatted storage such as SSDs? If not, is there a documented way to perform this alignment manually?

Regards,
Dominic Ryan
Hi,

> Just wondering if Xen/XCP aligns local storage to 4K boundaries when
> using 4K formatted storage such as SSDs? If not, is there a
> documented way to manually perform this alignment?

Linux basically does that kind of alignment for you, based on the information the hardware hands over to the kernel (WD Green disks, for example, pretend to have a 512B sector size, so the alignment does not work). But this information is only available in Dom0; the Xen storage backend uses a hardcoded 512B block size when passing devices through to DomU [1]. This has close to no effect when you do all the partitioning and filesystem creation in Dom0 (where the necessary information is available), or when you manually specify the correct options to fdisk/mkfs in DomU (which is a pain). For accessing data, Linux uses a default block size of 4K anyway.

-- Adi

[1] http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1745
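Adi's description of the kernel auto-detecting sector sizes can be checked by hand in Dom0. A minimal sketch, assuming a standard Linux sysfs layout ("sda" is a placeholder device name); the alignment check itself is plain arithmetic:

```shell
# In Dom0, the kernel exposes what the drive reported (placeholder device "sda"):
#   cat /sys/block/sda/queue/physical_block_size   # e.g. 4096 on a 4K/512e SSD
#   cat /sys/block/sda/queue/logical_block_size    # e.g. 512
# Check whether a partition start (given in 512B sectors) sits on a 4K boundary:
aligned() {
    if [ $(( $1 * 512 % 4096 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: misaligned"
    fi
}
aligned 2048   # the 1 MiB default start used by modern fdisk/parted
aligned 63     # the legacy DOS layout, misaligned on 4K media
```

A drive that misreports its physical block size (like the WD Green example above) defeats the automatic path, which is why checking sysfs first is worthwhile.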
On 05/06/2013 03:41 AM, dominic.ryan@it-hq.org wrote:

> Hi All,
>
> Just wondering if Xen/XCP aligns local storage to 4K boundaries when
> using 4K formatted storage such as SSDs? If not, is there a documented
> way to manually perform this alignment?

You mean in the context of using a file to contain the virtual disk image? Just put it on a FS that uses 4KB sectors and is aligned to a 4KB boundary, and when you install the guest, make sure you set up its FS to also use 4KB blocks.

Gordan
Hey,

> You mean in the context of using a file to contain the virtual disk
> image? Just put it on a FS that is using 4KB sectors, and that is
> aligned to a 4KB boundary, and when you are installing the guest
> make sure you set up the FS to also use 4KB blocks.

Hmmm, that depends on what you do within the image. If you create partitions inside the image, you need to ensure those partitions are aligned to 4K blocks, which boils down to using GPT for the partition table. If you create a filesystem directly on the raw image, you don't need to do anything: the blocks are already aligned, and it does not matter what block size you use when creating the filesystem, so 512B is fine too.

I have actually never used image files, only storage devices like LVM volumes. But that depends on the use case, of course.

-- Adi
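To make the partition-alignment point concrete: modern GPT tools start the first partition at 1 MiB, which is aligned for every common sector size. A sketch of doing it by hand (the parted/mkfs lines are illustrative only, with a placeholder device name):

```shell
# Manually aligning inside a DomU (illustrative; /dev/xvdb is a placeholder):
#   parted -s /dev/xvdb mklabel gpt mkpart primary 1MiB 100%
#   mkfs.ext4 -b 4096 /dev/xvdb1
# Why a 1 MiB start is safe: it is a multiple of every power-of-two
# sector size up to 1 MiB, so the remainder is 0 for all of them.
start=$((1024 * 1024))   # partition start offset in bytes
for bs in 512 4096 8192 65536; do
    echo "remainder at ${bs}B: $((start % bs))"
done
```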
On 2013-05-07 00:31, Adi Kriegisch wrote:

> Hey,
>
>> You mean in the context of using a file to contain the virtual disk
>> image? Just put it on a FS that is using 4KB sectors, and that is
>> aligned to a 4KB boundary, and when you are installing the guest
>> make sure you set up the FS to also use 4KB blocks.
>
> Hmmm, that depends on what you do within the image: In case you create
> partitions within those images you need to assure that those partitions
> are aligned to use 4K blocks which boils down to using GPT for the
> partition table.
> When you create a file system on the raw image you don't need to do
> anything: the blocks are already aligned and it does not matter what
> blocksize you use for creating the filesystem; so 512B is ok too.
> I actually never used image files but only storage devices like lvm
> volumes. But that depends on the use case, of course.
>
> -- Adi
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xen.org
> http://lists.xen.org/xen-users

Thanks for the replies, everyone.

Adi, just so I'm 100% clear on this: are you saying that whether the storage is 4K-aligned or not is irrelevant, due to the hard-coded 512B issue with DomU? Is the performance hit you were seeing in your bug report still applicable to non-RAID environments? The reason I ask is that with a single Windows 2008 R2 VM on XCP 1.6, all loaded on a Samsung 840 500GB SSD, I am getting IOPS and transfer rates in IOMeter that are a fraction (as in 5%) of the manufacturer's claims. I think I'll blow away my XCP install, put Windows 2008 R2 on directly, and see what I get out of IOMeter there using the same testing profiles.

Thanks
> Adi, just so I'm 100% clear on this. Are you saying that 4k aligned or
> not is irrelevant due to the 512 hard-coded issue with DomU? Is the
> performance hit you were seeing in your bug report still applicable to
> non RAID environments? [...]

When you type "fsutil fsinfo ntfsinfo c:" at a command prompt, what does Windows say for "Bytes Per Physical Sector" and "Bytes Per Sector"?

James
Hi,

> Reason I ask is that with a single Windows 2008 R2 VM on XCP 1.6 all loaded on
> a Samsung 840 500GB SSD I am getting IOPS and transfer rates in IOMeter that
> are a fraction (as in 5%) of the manufacturers claims.

I'd be quite interested to hear your conclusions, if any. We ended up dropping XCP as a platform, with the last batch of VMs being replaced last Christmas, due to I/O and networking performance issues. We were never able to get more than 20 MB/s out of the VM disks, and network performance was poor enough that it was impossible to run heavily loaded load balancers (IPVS) on them.

I am sure our problems were due to configuration mistakes, at least I hope so, but we have never seen anything like it in our new environment (on the same hardware).

Best regards
Jan
Top-posting, I know; limitations of a mobile device, sorry.

I am not disputing anything you've said; however, for what it's worth, I had the opposite experience. After buying hardware that I checked was on the VMware list, I found it underperformed, didn't support the RAID controller properly, and had odd networking issues dealing with our VLANs. Xen was an improvement, and I'm looking forward to pushing the boundaries with XCP and being able to solve problems thanks to its more open nature. Of course, I miss a few things, like the ability to change network (and other) config options on a running VM, but I figure I probably just haven't discovered how to do those things yet.

When I had issues on VMware, I had trouble getting to the right people and getting them to take the issue seriously. The Xen/XCP community seems a lot more open and engaging. For that I'm very thankful.

Regards returned,
Mitch.

----- Original Message -----
From: Jan-Aage Frydenbø-Bruvoll [mailto:jan@architechs.eu]
Sent: Monday, May 06, 2013 11:41 PM
To: Xen-users@lists.xen.org
Subject: Re: [Xen-users] SSD 4K Alignment?

Hi,

> Reason I ask is that with a single Windows 2008 R2 VM on XCP 1.6 all loaded on
> a Samsung 840 500GB SSD I am getting IOPS and transfer rates in IOMeter that
> are a fraction (as in 5%) of the manufacturers claims.

Quite interested to hear your conclusions if any. We ended up dropping XCP as a platform with the last bunch of VMs being replaced last Christmas, due to IO and networking performance issues. We were never able to get more than 20 MBps out of the VM disks, and network performance was poor enough for it to be impossible to run heavily loaded load balancers (IPVS) on them.

I am sure our problems were due to configuration mistakes, at least I hope so, but we have never seen anything like it in our new environment (on the same hardware).
Best regards
Jan
Hi Jan,

Are you using the GPLPV drivers for Windows? These should be considered mandatory for anyone expecting performance out of Windows on Xen.

Cheers,
Greg

On Tue, May 7, 2013 at 6:41 PM, Jan-Aage Frydenbø-Bruvoll <jan@architechs.eu> wrote:

> Hi,
>
>> Reason I ask is that with a single Windows 2008 R2 VM on XCP 1.6 all loaded on
>> a Samsung 840 500GB SSD I am getting IOPS and transfer rates in IOMeter that
>> are a fraction (as in 5%) of the manufacturers claims.
>
> Quite interested to hear your conclusions if any. We ended up dropping XCP
> as a platform with the last bunch of VMs being replaced last Christmas, due
> to IO and networking performance issues. We were never able to get more
> than 20 MBps out of the VM disks, and network performance was poor enough
> for it to be impossible to run heavily loaded load balancers (IPVS) on them.
>
> I am sure our problems were due to configuration mistakes, at least I hope
> so, but we have never seen anything like it in our new environment (on the
> same hardware).
>
> Best regards
> Jan
Hey,

> Adi, just so I'm 100% clear on this. Are you saying that 4k aligned
> or not is irrelevant due to the 512 hard-coded issue with DomU? Is
> the performance hit you were seeing in your bug report still
> applicable to non RAID environments?

Those are two different issues. You already have a filesystem that is aligned; on top of that sit the image files containing the DomU filesystems, so no alignment work is needed there. The issue I had was quite different: there was a RAID6 with 6 data disks (instead of 4), and I also found some other alignment problems. On top of that there is whatever results from not passing the physical disk specs through.

> Reason I ask is that with a
> single Windows 2008 R2 VM on XCP 1.6 all loaded on a Samsung 840
> 500GB SSD I am getting IOPS and transfer rates in IOMeter that are a
> fraction (as in 5%) of the manufacturers claims. [...]

Pfff... there are many reasons that can happen. First and foremost: did you install the GPLPV drivers and make sure they work? IMHO using image files is not the fastest storage backend, but if you do use them, did you give enough RAM to your Dom0? Did you disable ballooning and pin CPUs (there is more than one CPU attached to Dom0, right)? So I am pretty sure your issue has nothing to do with 4K alignment. ;-)

-- Adi
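For reference, Adi's checklist (fixed Dom0 memory, ballooning off, pinned CPUs) corresponds roughly to Xen hypervisor boot parameters like the following; the values are illustrative only, not recommendations, and on XCP much of this is preconfigured rather than set by hand:

```
# Appended to the Xen (hypervisor) line in the bootloader config; example values
dom0_mem=2048M,max:2048M   # fix Dom0's memory allocation; also prevents Dom0 ballooning
dom0_max_vcpus=2           # give Dom0 a fixed number of vCPUs...
dom0_vcpus_pin             # ...and pin them to physical CPUs
```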
Hi Greg,

> Are you using the GPLPV drivers for windows? These should be considered
> mandatory for anyone expecting performance out of Windows on Xen.

No, all our VMs were Linux. We were pretty desperate (having both SSD and FusionIO storage to deploy) and tried more or less every pre-built distro image out there, along with all kinds of hand-built kernels (we're mainly a Gentoo shop), all to no avail. We weren't able to test further as we were on the brink of losing our customers, so there might very well have been things we could have done differently, but nothing that three months of googling and experimenting could uncover.

Best regards
Jan
Hi Mitch,

Absolutely, and for the record, we have seen worse problems on other platforms, so I am definitely not out XCP/Xen-bashing. We ended up with a home-hacked LXC rig, which I am now trying to get rid of, as we just can't scale it properly. We're investigating a few different options, and they -will- involve either Xen or KVM; at least this time around we have a running platform and will have much more time for un-panicked investigation.

To be honest, I'm just very curious as to what the solution here might be; maybe we made an error or forgot something that threw us into the same hole.

Best regards
Jan
On 07/05/13 16:41, Jan-Aage Frydenbø-Bruvoll wrote:

> Quite interested to hear your conclusions if any. We ended up dropping XCP as a
> platform with the last bunch of VMs being replaced last Christmas, due to IO and
> networking performance issues. We were never able to get more than 20 MBps out
> of the VM disks, and network performance was poor enough for it to be impossible
> to run heavily loaded load balancers (IPVS) on them.
>
> I am sure our problems were due to configuration mistakes, at least I hope so,
> but we have never seen anything like it in our new environment (on the same
> hardware).

Not sure exactly what Linux kernel version is used on XCP, or what disks etc. you were using, but I can confirm that there was definitely a bug in the Linux 2.6 kernel from Debian Squeeze which seriously affected SSD performance (i.e., reduced it to worse than HDD speed), and which was fixed by upgrading to the Debian Squeeze backports kernel, 3.0.0 I think from memory. I don't have the bug details on hand right now, but if you are interested and can't find it, let me know and I'll dig it up.

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
Hi Adam,

> Not sure exactly what Linux kernel version is used on XCP, or what disks etc. you
> were using, but I can confirm that there was definitely a bug in Linux 2.6 from
> Debian Squeeze which seriously affected SSD performance (i.e., reduced it to worse
> than HDD speed), which was fixed by upgrading to the Debian Squeeze backports
> kernel, 3.0.0 I think from memory.

Now that's interesting. The last kernel we tested on our XCP rig was 2.6.39, which still had the problem. You wouldn't happen to have an idea whether the same bug was present in other kernels as well?

Best regards
Jan
On 08/05/13 00:01, Jan-Aage Frydenbø-Bruvoll wrote:

> Now that's interesting. The last kernel we tested on our XCP rig was 2.6.39,
> which still had the problem. You wouldn't happen to have an idea whether the
> same bug was present in other kernels as well?

Actually, it was in the mainline Linux kernel and had not been patched in the latest Debian stable (Squeeze) kernel. I am positive it was fixed before 3.0.0, and it might have been fixed by 2.6.39. A very quick Google search gives this: http://article.gmane.org/gmane.linux.ide/45053 From there, you should be able to find when it was added/removed from the kernel and whether the versions you used were affected or not.

PS: with my current Debian Wheezy Xen boxes and my Debian Squeeze iSCSI server (with kernel and iSCSI from backports), using 2 x 1G Ethernet for iSCSI on the Xen boxes and 8 x 1G Ethernet on the iSCSI box, I get around 220 MB/s random read/write from the Windows 2003 DomU. I'm using 5 x Intel 480GB SSDs in RAID5 on the iSCSI server, with DRBD replicating to a second iSCSI server, and Intel dual-port (Xen) and quad-port (iSCSI) NICs. The iSCSI server gets around 1.6 GB/s random write and 2.6 GB/s random read (the read speed may be a bit off, from memory, but the write speed is correct, because I know I could at least fill my 8 x 1Gbps Ethernet if I had 4 or more Xen boxes hitting their disks hard).

Regards,
Adam

--
Adam Goryachev
Website Managers
Ph: +61 2 8304 0000        adam@websitemanagers.com.au
Fax: +61 2 8304 0001       www.websitemanagers.com.au
Hi Adam,

Thank you very much for your thorough response. I look forward to testing this in more detail as soon as we get the next shipment of hardware in. By the way, our performance numbers were in the region of 10% of what you just posted.

Best regards
Jan