Greetings!

I've got a FreeBSD-based (FreeNAS) appliance running as an HVM DomU.
Dom0 is Debian Squeeze on an AMD990 chipset system with IOMMU enabled.

The DomU sees six physical drives: one of them is a USB stick that I've
passed through in its entirety as a block device. The other five are
SATA drives attached to a controller that I've handed to the DomU with
PCI passthrough.

The relevant parts of the DomU configuration are:

name = 'freenas-hvm'
kernel = "/usr/lib/xen-4.0/boot/hvmloader"
builder = 'hvm'
memory = 1024
shadow_memory = 8
vcpus = 1
device_model = '/usr/lib/xen-4.0/bin/qemu-dm'
disk = [ 'phy:/dev/sdc,hda,w' ]   # /dev/sdc is the USB stick
pci = [ '00:11.0' ]               # This is the SATA controller with 5 drives
vif = [ 'bridge=vlan14' ]
boot='dc'
sdl=0
vnc=1
vnclisten='0.0.0.0'
vncconsole=1
stdvga=0

The SATA controller according to 'lspci':

00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] (rev 40)

Everything "works", but it is painfully slow. Reading from a single SATA
drive within the DomU gives me about 0.5MB/s:

[root@freenas /dev]# dd if=/dev/ada1 of=/dev/null skip=100000 bs=4096 count=1000
1000+0 records in
1000+0 records out
4096000 bytes transferred in 8.058105 secs (508308 bytes/sec)

Concurrent reads from all five SATA drives show that they're able to
achieve this speed all at the same time:

[root@freenas /dev]# for disk in ada1 ada2 ada3 ada4 ada5
> do dd if=/dev/$disk of=/dev/null bs=4096 count=1000 &
> done
4096000 bytes transferred in 8.049052 secs (508880 bytes/sec)
4096000 bytes transferred in 8.070050 secs (507556 bytes/sec)
4096000 bytes transferred in 8.071446 secs (507468 bytes/sec)
4096000 bytes transferred in 8.447751 secs (484863 bytes/sec)
4096000 bytes transferred in 8.501915 secs (481774 bytes/sec)

The USB stick, OTOH, passed through as a block device? It reads 18x
faster, at around 9MB/s:

[root@freenas /dev]# dd if=/dev/ada0 of=/dev/null bs=4096 count=1000
1000+0 records in
1000+0 records out
4096000 bytes transferred in 0.458198 secs (8939370 bytes/sec)

From the Dom0 I can read from the USB stick at around 15MB/s (slow
media), and I can read from all SATA drives at around 80-100MB/s
concurrently (after un-hiding the PCI device).

If I pass the drives through individually (as I have done with the USB
stick) the DomU reveals a 10MB/s ceiling: I can read from one disk at
10MB/s, or I can read from all at 2MB/s each.

Thoughts? Does this rotten behavior even make sense?

FreeBSD doesn't support PV mode on amd64, so that's out, but there are
some PV drivers within HVM mode that I could be playing with. I don't
really grok the details of it, but I don't think I have them working
right now. I wonder if this is the ticket?

I'd appreciate any advice that would help me to improve the situation.

Thank you!
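For context, handing 00:11.0 to the DomU requires hiding it from the
Dom0 first ("un-hiding" it is what lets the Dom0 read the drives again,
as mentioned above). A sketch of the usual sysfs approach; the driver
directory may be called pciback or xen-pciback depending on the Dom0
kernel build, and the ahci unbind assumes that driver currently owns the
controller:

# In the Dom0: detach the SATA controller from its native driver and
# hand it to pciback so it can be delegated to the DomU
# (driver name may be 'pciback' or 'xen-pciback' depending on the kernel)
echo 0000:00:11.0 > /sys/bus/pci/drivers/ahci/unbind
echo 0000:00:11.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:00:11.0 > /sys/bus/pci/drivers/pciback/bind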
I would not read too much into these performance numbers. I have found
that FreeNAS is pretty slow even directly on physical hardware. Using
exactly the same hardware, OpenSolaris or Nexenta is way faster than
FreeNAS.

http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/

Why do you want to run FreeNAS in a VM? Is it for production purposes or
testing purposes? I would not recommend running a file server as a VM if
it is a performance-sensitive production situation.

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Chris Marget
Sent: Friday, February 03, 2012 11:54 AM
To: xen-users@lists.xensource.com
Subject: [Xen-users] Spectacularly disappointing disk throughput

[original message snipped]
On Fri, Feb 3, 2012 at 1:14 PM, <admin@xenhive.com> wrote:
> I would not read too much into these performance numbers. I have found
> that FreeNAS is pretty slow even directly on physical hardware. Using
> exactly the same hardware, OpenSolaris or Nexenta is way faster than
> FreeNAS.

I don't expect it to be blazingly fast -- that's not really a priority.
I'm not testing ZFS filesystem performance, just sequential reads from
the block device. At these numbers, five spindles working in concert
will barely saturate a legacy 10Mb/s link, and individual passthrough of
block devices is faster than passthrough of the PCI controller.
Something's seriously broken here :-)

> Why do you want to run FreeNAS in a VM? Is it for production purposes
> or testing purposes? I would not recommend running a file server as a
> VM if it is a performance-sensitive production situation.

This project is for production home use. I'd like to have everything in
my house run in one physical server, rather than lots of little ones.
The old server was OpenSolaris Dom0 with DomUs running on zvol
backstores. I'm trying to get away from Oracle, but want to retain ZFS.

/chris
Hi Chris,

yes, the PV disk and network drivers make the difference between heaven
and hell for FreeBSD domUs. Do not bother with any more benchmarks until
you have them working :)

2012/2/3 Chris Marget <chris@marget.com>:
> On Fri, Feb 3, 2012 at 1:14 PM, <admin@xenhive.com> wrote:
>> I would not read too much into these performance numbers. I have found
>> that FreeNAS is pretty slow even directly on physical hardware. Using
>> exactly the same hardware, OpenSolaris or Nexenta is way faster than
>> FreeNAS.

FreeNAS will completely pwn both on a 256MB system ;) But yes, in
general it's slower; I also kept using FreeNAS instead of Nexenta due to
the smaller footprint and, my, I loved the clean UI, until they redid it
for (as the changelog put it) "adding more round edges".

> I don't expect it to be blazingly fast -- that's not really a priority.

Florian
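For reference: on the FreeBSD 8.x base that FreeNAS of this era is built
on, the PV-on-HVM drivers Florian mentions come from building a kernel
with the Xen HVM options. A minimal sketch of the additions to a
GENERIC-derived kernel config; the option names should be verified
against the exact FreeBSD source tree underlying the FreeNAS build:

# FreeBSD kernel configuration additions for PV-on-HVM support (sketch)
options   XENHVM      # Xen PV-on-HVM support (blkfront/netfront in an HVM guest)
device    xenpci      # Xen platform PCI device that hooks up the PV frontends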
On Fri, Feb 3, 2012 at 2:28 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
> yes, the PV disk and network drivers make the difference between heaven
> and hell for FreeBSD domUs. Do not bother with any more benchmarks
> until you have them working :)

> But yes, in general it's slower; I also kept using FreeNAS instead of
> Nexenta due to the smaller footprint and, my, I loved the clean UI,
> until they redid it for (as the changelog put it) "adding more round
> edges".

Hi Florian,

I believe that I've gotten the PV drivers working at least partly
correctly. The exact procedure I used for the kernel build is documented
here:

http://files.fragmentationneeded.net/freebsd/build_kernel.txt

I dropped the new kernel in place of the old one in the FreeNAS
/boot/kernel directory, and used the same Xen guest configuration:

http://files.fragmentationneeded.net/freebsd/freenas-hvm.cfg

I see Xen-related lingo as the system boots:

http://files.fragmentationneeded.net/freebsd/dmesg.txt

The network interface now appears as "xn0", and the disks have moved
around from ada* to ad* names.

Running the same 'dd' tests as before, I find that the boot device (ad0,
the xbd0 virtual block device noted by dmesg) has much improved
performance: 60MB/s. But the drives attached to the PCI-passthrough SATA
controller remain exactly where they were: 0.5MB/s.

It seems that the PV block driver hasn't noticed the disks attached to
the passed-through controller. What do you think? Should I even expect
the PV drivers to help in this PCI passthrough scenario?

Thank you!
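One experiment that follows from this, as a sketch only: now that the
blkfront path is fast for the boot stick, the five data drives could be
handed to the DomU individually as phy: block devices instead of via the
PCI-passthrough controller, so they also ride the PV block driver.
Earlier in the thread this hit a 10MB/s ceiling, but that was on the
emulated IDE path, before the PV drivers worked. The Dom0 names
/dev/sd[d-h] below are hypothetical placeholders; check how the drives
actually enumerate in the Dom0 first.

# Hypothetical disk stanza for the freenas-hvm config: individual phy:
# export of the five data drives so blkfront handles them; the
# /dev/sd[d-h] names are placeholders for the drives' real Dom0 names
disk = [ 'phy:/dev/sdc,hda,w',     # USB boot stick, as before
         'phy:/dev/sdd,xvdb,w',
         'phy:/dev/sde,xvdc,w',
         'phy:/dev/sdf,xvdd,w',
         'phy:/dev/sdg,xvde,w',
         'phy:/dev/sdh,xvdf,w' ]
# pci = [ '00:11.0' ]              # controller passthrough disabled for this test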
Hi Chris,

unfortunately, that's over my head. I think you're correct that the PV
drivers cannot possibly handle a PCI-passthrough device.

With the FreeBSD domU I had built, I also had to switch the device type
in the Xen config, albeit that was still on Xen 4.0. I can't read
through your notes, just not enough time, but it can't hurt to check
that the backend for your PV disks is really "blkback", methinks. On the
other hand, I'm not 100% sure this would still be true on pvops host
kernels.

I haven't been back to using PCI delegation for a few years now; Adaptec
SCSI controllers and their IRQ-sharing flaws made my host crash and I
was simply fed up with it.

The first thing to check with bad SATA performance on your
passed-through controller would be whether AHCI disk access works. I
hope you delegated the full controller and not a single PCI function of
the controller? That would make me worry.

It would be good if you could test the passthrough SATA performance in a
HVM Linux or Windows domU to get some more data about this. But if I had
only one shot: AHCI not working.

Florian
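Checking the backend type Florian mentions can be done from the Dom0 by
inspecting xenstore for the guest's virtual block devices. A sketch,
assuming the domain name from the config above and the xm toolstack; the
exact xenstore paths can vary slightly between Xen versions:

# Find the numeric domain ID of the guest
xm list freenas-hvm

# List the block-device backends Dom0 provides to that domain
# (replace <domid> with the number from 'xm list'); entries under
# .../backend/vbd/... indicate the standard blkback backend
xenstore-ls /local/domain/0/backend/vbd/<domid>

# The frontend side, as seen in the guest's half of xenstore
xenstore-ls /local/domain/<domid>/device/vbd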
Hi Florian,

> The first thing to check with bad SATA performance on your
> passed-through controller would be whether AHCI disk access works.

Interesting point. My first citation at the beginning of this thread
referenced disk devices at /dev/adaX. I believe that this means AHCI was
working. ...But they're now at /dev/adX, so AHCI is not working
currently. It doesn't seem to make any difference, because the
performance numbers are identical.

I've since added 'device ahci' to my kernel configuration and am
building a new kernel right now. I hope to get AHCI working again.

> I hope you delegated the full controller and not a single PCI function
> of the controller? That would make me worry.

I believe so. I've delegated '00:11.0'. I think that '.0' is the
function? It's the only one on '00:11':

# lspci -s 00:11
00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] (rev 40)
#

> It would be good if you could test the passthrough SATA performance in
> a HVM Linux or Windows domU to get some more data about this. But if I
> had only one shot: AHCI not working.

I tried to boot the Debian Squeeze installer as an HVM, but it kept
coming up in PV mode, so I grabbed a Lenny installation image and booted
it like this:

name = 'lenny-installer'
kernel = "/usr/lib/xen-4.0/boot/hvmloader"
builder = 'hvm'
memory = 1024
device_model = '/usr/lib/xen-4.0/bin/qemu-dm'
disk = [ 'phy:/dev/loop1,ioemu:hdb:cdrom,r', ]
pci = [ '00:11.0' ]
boot='d'
sdl=0
vnc=1
vnclisten='0.0.0.0'
vncconsole=1
stdvga=0

I'm pretty confident that it's an HVM because:

# xm list -l lenny-installer | grep -A 1 image
    (image
        (hvm

This guest is able to read from the SATA drives in excess of 100MB/s,
while FreeBSD gives less than 0.5MB/s :(

Does the test look like it is sound?

Thanks again.

/chris
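For completeness, the read test inside the Linux HVM guest would look
roughly like the FreeBSD one. A sketch, assuming the passthrough drives
show up as /dev/sda and so on inside the Lenny guest (the names depend
on how the installer kernel enumerates the controller), with a larger
count so the run lasts long enough to be meaningful at 100MB/s:

# Inside the Lenny HVM guest: sequential read from one passthrough drive,
# same 4KB block size as the FreeBSD tests for an apples-to-apples number
dd if=/dev/sda of=/dev/null bs=4096 count=100000

# And concurrently across several drives (adjust device names to match)
for disk in sda sdb sdc sdd sde; do
    dd if=/dev/$disk of=/dev/null bs=4096 count=100000 &
done
wait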