Hi,

I'm comparing the disk I/O performance of domain 0 (running Gentoo) against an
unprivileged domain running ttylinux. I tried each test several times; the
results differ only within about a one-second range.

In domain 0:

zirafa ~ # time dd if=/dev/hda of=/dev/null bs=1M count=500
500+0 records in
500+0 records out

real    0m27.344s
user    0m0.000s
sys     0m2.280s

zirafa ~ # hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  56 MB in  3.03 seconds = 18.48 MB/sec

-----------------------------

And for the unprivileged domain:

root@tiny ~ # time dd if=/dev/sdb1 of=/dev/null bs=1M count=500
500+0 records in
500+0 records out

real    0m41.962s
user    0m0.010s
sys     0m1.920s

root@tiny ~ # hdparm -t /dev/sdb1

/dev/sdb1:
 Timing buffered disk reads:  64 MB in  5.32 seconds = 12.03 MB/sec
ioctl 00001261 not supported by Xen blkdev
hdparm: BLKFLSBUF: Function not implemented
ioctl 0000031f not supported by Xen blkdev
hdparm: HDIO_DRIVE_CMD: Function not implemented

I've tried exporting /dev/hda as /dev/sdb, /dev/sdb1 and /dev/hda, all of them
in read-only mode. The results look very similar; only `hdparm -t /dev/hda`
from domain U complains a bit more:

root@tiny ~ # hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  64 MB in  5.38 seconds = 11.90 MB/sec
ioctl 00001261 not supported by Xen blkdev
hdparm: BLKFLSBUF: Function not implemented
ioctl 0000031f not supported by Xen blkdev
hdparm: HDIO_DRIVE_CMD: Function not implemented
[XEN:vbd_update:drivers/xen/blkfront/blkfront.c:194] >
[XEN:vbd_update:drivers/xen/blkfront/blkfront.c:195] <

I'm using an older version of Xen/2, compiled on 14 Nov 2004. The machine is a
Celeron/466 with 256 MB of RAM, 64 MB of which goes to dom0.

ttylinux.conf:

kernel = "/boot/vmlinuz-2.6.9-xenU"
memory = 64
name = "ttylinux"
nics = 1
ip = "10.18.6.10"
disk = ['file:/home/storage/ttylinux-xen,sda1,r','phy:hda,hda,r']
root = "/dev/sda1 ro"

ttylinux's rootfs is exported from an ext3 filesystem via a loopback device;
could that be the cause of the trouble?

-jkt

-- 
cd /local/pub && more beer > /dev/mouth
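For reference, the dd timings above translate into throughput as follows (a
quick sketch using bc; the figures are taken directly from the outputs above):

  echo "scale=2; 500/27.344" | bc   # dom0: ~18.3 MB/s (500 MB in 27.3 s)
  echo "scale=2; 500/41.962" | bc   # domU: ~11.9 MB/s (500 MB in 42.0 s)

That is roughly a 35% drop, consistent with the two hdparm -t readings
(18.48 MB/sec vs. 12.03 MB/sec).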
On Thu, 2005-01-13 at 10:06, Jan Kundrát wrote:

> ttylinux's rootfs is exported from ext3 fs via loopback device, could it
> be the cause of troubles?

I think that's a reasonable answer. It should be easy to test, though. Try
the same test on dom0, using /dev/loopN, where N is the loopback device that
has your rootfs on it.

The difference between testing /dev/loopN access on dom0 and the virtual
block device on domU should tell you the Xen-imposed performance penalty.

I did a quick test on my system and there was a 50% slowdown when using the
loopback device for your test, so I imagine that's the cause.

Regards,

> -jkt

-- 
Anthony Liguori
Samba, Linux/Windows Interoperability
Linux Technology Center (LTC) - IBM Austin
E-mail: aliguori@us.ibm.com
Phone: (512) 838-1208
Tie Line: 678-1208
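A minimal sketch of the comparison suggested above, assuming the rootfs image
is attached to /dev/loop0 in dom0 (losetup -a shows the actual mapping):

  # in dom0: raw read through the loop driver backing the rootfs image
  losetup -a
  time dd if=/dev/loop0 of=/dev/null bs=1M

  # in domU: the same read against the vbd exported from that image (sda1 per ttylinux.conf)
  time dd if=/dev/sda1 of=/dev/null bs=1M

Comparing the two times shows what the Xen virtual block device adds on top of
the loopback path itself.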
Anthony Liguori wrote:

> I think that's a reasonable answer. It should be easy to test, though. Try
> the same test on dom0, using /dev/loopN, where N is the loopback device
> that has your rootfs on it.

Only the rootfs is exported from a loopback device; I'm benchmarking access
to dom0's /dev/hda, which is exported to domU as read-only. As the rootfs
image is quite small (16 MB), I can't easily test its access speed, because
it will fit into the cache.

> The difference between testing /dev/loopN access on dom0 and the virtual
> block device on domU should tell you the Xen-imposed performance penalty.
>
> I did a quick test on my system and there was a 50% slowdown when using
> the loopback device for your test, so I imagine that's the cause.

Accessing an exported loopback device, or some other device?

-jkt

-- 
cd /local/pub && more beer > /dev/mouth
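One way around the 16 MB image, as a rough sketch: build a larger scratch
image on the same ext3 filesystem and time reads through a loop device
attached to it. The path and /dev/loop1 below are made-up names; 512 MB is
assumed to be well past dom0's 64 MB of RAM, so the page cache can't hide the
result:

  dd if=/dev/zero of=/home/storage/looptest.img bs=1M count=512   # scratch image on the ext3 fs
  losetup /dev/loop1 /home/storage/looptest.img                   # attach it to a loop device
  time dd if=/dev/loop1 of=/dev/null bs=1M                        # time reads through the loop driver
  losetup -d /dev/loop1                                           # detach and clean up
  rm /home/storage/looptest.img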
Anthony Liguori wrote:

> Is /dev/hda what dom0 is mounted from? If so, can you try the test
> again with a neutral partition (one that neither has mounted)?

Yes, dom0's filesystems are on partitions located on hda.

I've tried exporting /dev/hda7 (neither used nor mounted by dom0) as sdb7,
and I get about 12.3 MB/s. From dom0 I get about 14.8 MB/s. Tested by
`time dd if=/dev/{sdb7|hda7} of=/dev/null bs=1M`, about 4.5 GB of data.

-jkt

-- 
cd /local/pub && more beer > /dev/mouth
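For completeness, that test corresponds roughly to the following; the extra
vbd entry is an assumption, extrapolated from the disk syntax in the
ttylinux.conf quoted earlier:

  disk = ['file:/home/storage/ttylinux-xen,sda1,r','phy:hda7,sdb7,r']

  # dom0:
  time dd if=/dev/hda7 of=/dev/null bs=1M    # ~14.8 MB/s reported above
  # domU:
  time dd if=/dev/sdb7 of=/dev/null bs=1M    # ~12.3 MB/s reported above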
On Sat, 2005-01-15 at 08:36, Jan Kundrát wrote:

> Anthony Liguori wrote:
> Yes, dom0's filesystems are on partitions located on hda.

My theory is that, since hda is a single disk, if dom0 is reading and writing
to it while another domain is reading and writing to a partition on the same
disk at the same time, both will be slower than if you were just writing to
hda from dom0. That's why I suggest trying it with a neutral device.

> I've tried exporting /dev/hda7 (neither used nor mounted by dom0) as sdb7,
> and I get about 12.3 MB/s. From dom0 I get about 14.8 MB/s.

That seems pretty reasonable. It doesn't look like there's a problem. Sharing
partitions between dom0 and domU seems like a bad idea; look through the
threads on this list about filesystem corruption.

> Tested by `time dd if=/dev/{sdb7|hda7} of=/dev/null bs=1M`, about 4.5 GB
> of data.
>
> -jkt

-- 
Anthony Liguori
Linux Technology Center (LTC) - IBM Austin
E-mail: aliguori@us.ibm.com
Phone: (512) 838-1208
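One simple way to check the single-disk-contention theory, as a sketch: watch
dom0's block I/O counters while both domains are supposed to be idle, to rule
out background activity competing with the benchmarks:

  # in dom0; the bi/bo columns show blocks read/written per second
  vmstat 1

If bi/bo stay near zero outside the benchmark runs, competing dom0 I/O on the
same spindle can probably be ruled out.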
Anthony Liguori wrote:

> On Sat, 2005-01-15 at 08:36, Jan Kundrát wrote:
>
>> Anthony Liguori wrote:
>> Yes, dom0's filesystems are on partitions located on hda.
>
> My theory is that, since hda is a single disk, if dom0 is reading and
> writing to it while another domain is reading and writing to a partition
> on the same disk at the same time, both will be slower than if you were
> just writing to hda from dom0.

OK, I'll try to add another device and play with it. By the way, both dom0
and the domain<n> (running ttylinux) were idle, with almost no disk activity.
I just ran `hdparm` inside dom0, waited for the results, did the same in
domain U, and repeated this several times.

> That's why I suggest trying it with a neutral device.
>
>> I've tried exporting /dev/hda7 (neither used nor mounted by dom0) as
>> sdb7, and I get about 12.3 MB/s. From dom0 I get about 14.8 MB/s.
>
> That seems pretty reasonable. It doesn't look like there's a problem.
> Sharing partitions between dom0 and domU seems like a bad idea; look
> through the threads on this list about filesystem corruption.

A 30% performance loss seems like a problem to me ;-), compared to the <3%
(IIRC) reported in Xen's benchmarks. I just want to find out whether the
problem is in Xen. Of course I'm *not* sharing the same partition between
domains; I'm only performing read benchmarks on the same disk location.

-jkt

-- 
cd /local/pub && more beer > /dev/mouth
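For reference, the losses implied by the figures quoted in this thread (quick
bc arithmetic on the numbers above):

  echo "scale=3; (18.48-12.03)/18.48" | bc   # whole-disk hdparm -t figures: ~.349, about 35%
  echo "scale=3; (14.8-12.3)/14.8" | bc      # hda7 dd figures:              ~.168, about 17%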