Ian Pratt
2005-Apr-15 21:44 UTC
RE: Disk naming (Was Re: [Xen-devel] [PATCH] Guest boot loader support [1/2])
I think I'd prefer not to complicate blkback, unless something's fundamentally wrong with the design of the loopback device. Anyone know about this? The trick with this kind of thing is avoiding deadlock under low memory situations...

> This brings up a really interesting point. Is there a good
> story yet for how more complex devices can be created on
> driver domains? For instance, how would you create an iSCSI
> device that existed on a driver domain (or is this something
> that wouldn't be all that useful)?
>
> Can we assume an rexec capability between dom0 and a driver domain?

I was anticipating a scheme whereby we use a second console connection to drive a tiny initrd in a driver domain.

Ian
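[For readers less familiar with the plumbing under discussion, a rough sketch of the two options being compared: exporting a file-backed virtual disk through dom0's loopback driver versus handing the guest an LVM volume directly. The paths, device names, and config lines are illustrative assumptions, not details from the thread.]

    # Attach a disk image to a loop device in dom0 (hypothetical paths):
    losetup /dev/loop0 /var/xen/guest-disk.img

    # In a domain config, xend can set up the loop device itself:
    #   disk = [ 'file:/var/xen/guest-disk.img,sda1,w' ]
    # An LVM-backed volume avoids the loopback path entirely:
    #   disk = [ 'phy:vg0/guest-disk,sda1,w' ]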
Anthony Liguori
2005-Apr-15 22:11 UTC
[Xen-devel] Loopback Performance (Was Re: Disk naming)
Ian Pratt wrote:

> I think I'd prefer not to complicate blkback, unless something's
> fundamentally wrong with the design of the loopback device. Anyone know
> about this? The trick with this kind of thing is avoiding deadlock under
> low memory situations...

I poked through the loopback code and it seems to be doing the reasonable thing. I decided to investigate for myself what the performance issues with the loopback device were. My theory was that the real cost was the double inode lookups (looking up the inodes in the filesystem on the loopback and then looking up the inodes on the host filesystem).

To verify, I ran a series of primitive tests with dd. First I baselined the performance of writing to a large file (by running dd if=/dev/zero conv=notrunc) on the host filesystem. Then I created a loopback device with the same file and ran the same tests writing directly to the loopback device.

I then created a filesystem on the loopback device, mounted it, then ran the same test on a file within the mount.

The results are what I expected. Writing directly to the loopback device was equivalent to writing directly to the file (usually faster actually--I attribute that to buffering). Writing to the file within the filesystem on the loopback device was significantly slower (about a ~70% slowdown).

If my hypothesis is right, that the slowdown is caused by the double inode lookups, then I don't think there's anything we could do in the blkback drivers to help that. This is another good reason to use LVM.

This was all pretty primitive so take it with a grain of salt.

Regards,

Anthony Liguori
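[A rough reconstruction of the three dd comparisons described above; the image path, sizes, block counts, and the ext3 filesystem choice are assumptions for illustration, not details given in the post.]

    # 1. Baseline: write into a large file on the host filesystem.
    dd if=/dev/zero of=/data/test.img bs=1M count=1024 conv=notrunc

    # 2. Attach the same file to a loop device and write to the device directly.
    losetup /dev/loop0 /data/test.img
    dd if=/dev/zero of=/dev/loop0 bs=1M count=1024

    # 3. Put a filesystem on the loop device, mount it, write to a file inside it.
    mke2fs -j /dev/loop0
    mkdir -p /mnt/loop
    mount /dev/loop0 /mnt/loop
    dd if=/dev/zero of=/mnt/loop/test.dat bs=1M count=1024 conv=notrunc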
Mark Williamson
2005-Apr-15 23:29 UTC
[Xen-devel] Re: Loopback Performance (Was Re: Disk naming)
On Friday 15 April 2005 23:11, Anthony Liguori wrote:

> Ian Pratt wrote:
> > I think I'd prefer not to complicate blkback, unless something's
> > fundamentally wrong with the design of the loopback device. Anyone know
> > about this? The trick with this kind of thing is avoiding deadlock under
> > low memory situations...
>
> I poked through the loopback code and it seems to be doing the
> reasonable thing. I decided to investigate for myself what the
> performance issues with the loopback device were. My theory was that
> the real cost was the double inode lookups (looking up the inodes in the
> filesystem on the loopback and then looking up the inodes on the host
> filesystem).

I'm sorry but I don't follow this. The inodes for the filesystem inside it only need to be looked up by the guest filesystem driver. The inode for the disk file only needs to be looked up once in dom0 when the file is opened (the metadata will then be cached). Am I missing something?

The data you've collected are interesting though. I wonder if searching the LKML archives might yield any interesting discussion about the loop device's behaviour.

Cheers,
Mark
Anthony Liguori
2005-Apr-16 00:12 UTC
[Xen-devel] Re: Loopback Performance (Was Re: Disk naming)
Mark Williamson wrote:

> On Friday 15 April 2005 23:11, Anthony Liguori wrote:
>
> I'm sorry but I don't follow this. The inodes for the filesystem inside it
> only need to be looked up by the guest filesystem driver. The inode for the
> disk file only needs to be looked up once in dom0 when the file is opened
> (the metadata will then be cached). Am I missing something?

I meant looking up the data blocks in the inode. You may be hitting triple-indirect blocks twice. I don't know enough about the kernel-level caching to say anything definitive. I do have some ideas about who to ask, though.

Regards,

Anthony Liguori
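[As an illustration of the block-mapping question raised here: the e2fsprogs tools can show how many levels of indirect block pointers a backing file needs on ext2/ext3, i.e. the mapping the loop driver has to walk on top of the guest filesystem's own. The file and device names below are hypothetical.]

    # debugfs takes the path relative to the root of the filesystem being
    # examined, here assumed to be /dev/sda1 mounted at /data.
    debugfs -R "stat /test.img" /dev/sda1
    # filefrag (from e2fsprogs) shows the same block mapping from the mounted side.
    filefrag -v /data/test.img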