I'd like to run an iSCSI initiator in domain 0 and export LUNs to the other domains as VBDs. The XenoLinux config file does not give the option of enabling SCSI support. I know XenoLinux can't talk to the hardware directly, so it can't use the actual hardware devices, but the SCSI layer could still be in place. Is there any reason why it isn't there, other than the assumption that without the drivers one doesn't need it?

Thanks.

-Kip

-------------------------------------------------------
The SF.Net email is sponsored by EclipseCon 2004
Premiere Conference on Open Tools Development and Integration
See the breadth of Eclipse activity. February 3-5 in Anaheim, CA.
http://www.eclipsecon.org/osdn
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/xen-devel
> I'd like to run an iSCSI initiator in domain 0 and export LUNs to the
> other domains as VBDs. The XenoLinux config file does not give the
> option of enabling SCSI support. I know XenoLinux can't talk to the
> hardware directly so it can't use the actual hardware devices, but
> the SCSI layer could still be in place. Is there any reason why it isn't
> there other than the assumption that without the drivers one doesn't
> need it?

Nope. It just saves us having to drag in drivers/scsi/Config.in, which would contain a number of unusable hardware-device options.

Copying the relevant generic portions of that config file into arch/xeno/config.in is one option; or just include the existing SCSI options file and ignore the obvious bad entries.

-- Keir
> I'd like to run an iSCSI initiator in domain 0 and export LUNs to the
> other domains as VBDs. The XenoLinux config file does not give the
> option of enabling SCSI support. I know XenoLinux can't talk to the
> hardware directly so it can't use the actual hardware devices, but
> the SCSI layer could still be in place. Is there any reason why it isn't
> there other than the assumption that without the drivers one doesn't
> need it?

Your assumption is correct. I've just added the SCSI menu option from arch/i386/config.in (see attached), and built a XenoLinux with SCSI and SCSI disk support compiled in. I'm not sure how you enable iSCSI support (or indeed whether the standard kernel even has iSCSI support).

Exporting disks to domains via iSCSI would be cool. There's always the alternative of the much simpler 'enbd', but iSCSI sounds nicer. I wonder if there's support for iSCSI root devices? (If not, it's possible something could be bodged with an initrd initial ramdisk.)

Ian

xenolinux-2.4.24-sparse/arch/xeno/config.in: 1.12 1.13 iap10 04/01/20 18:24:17 (modified, needs delta)

@@ -108,6 +108,17 @@
 endmenu
 
+mainmenu_option next_comment
+comment 'SCSI support'
+
+tristate 'SCSI support' CONFIG_SCSI
+
+if [ "$CONFIG_SCSI" != "n" ]; then
+  source drivers/scsi/Config.in
+fi
+endmenu
+
+
 if [ "$CONFIG_NET" = "y" ]; then
   source net/Config.in
 fi
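[For readers unfamiliar with the 2.4 kbuild scripts: Config.in files are interpreted as shell, so the tristate guard in the patch above behaves exactly like an ordinary shell conditional. A minimal sketch of that logic — the message strings are illustrative only, not part of the real config system:]

```shell
# Mimics the guard in the patch above: CONFIG_SCSI=y (built in) and
# CONFIG_SCSI=m (module) both pull in drivers/scsi/Config.in;
# only "n" skips the SCSI option menu entirely.
CONFIG_SCSI=m
if [ "$CONFIG_SCSI" != "n" ]; then
    result="sourcing drivers/scsi/Config.in"
else
    result="SCSI options skipped"
fi
echo "$result"
```
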
> I've just added the SCSI menu option from arch/i386/config.in
> (see attached), and built a xenolinux with SCSI and SCSI disk
> support compiled in. I'm not sure how you enable iSCSI support
> (or indeed if the standard kernel even has iSCSI support).

Great - thanks. There is no stock support for iSCSI in Linux; the Red Hat and SUSE distributions both ship one. I've always used the one from Cisco on SourceForge. The question is what you do for a target. I believe one has been written for Linux, but I have never used it. I've written one for FreeBSD that could easily be ported to Linux. If the company I've done it for doesn't pay me for the last bit of work, I'd probably rather open-source it than take the time and energy to pursue the more common alternative.

> Exporting disks to domains via iSCSI would be cool. There's
> always the alternative of the much simpler 'enbd', but iSCSI
> sounds nicer. I wonder if there's support for iSCSI root devices?
> (if not, it's possible something could be bodged with an initrd
> initial ramdisk).

iSCSI is the more general of the two. Data ONTAP now supports exporting LUNs over FCP and iSCSI. One can create a LUN as a "golden image" and clone it arbitrarily many times; thanks to the COW nature of WAFL, the only additional space required by the cloned LUNs is for modifications.

The only way for an iSCSI root to work out of the box is to have an initiator with a BIOS, and none that I know of currently do. One could play clever tricks by initially having a ramdisk and then switching.

My intention is to have domain 0 boot from local disk, but have all of the non-privileged domains boot off of iSCSI-backed VBDs.
-Kip
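[The ramdisk-then-switch trick mentioned above could look roughly like the 2.4-era /linuxrc pseudocode below. The module name, config path, and device names are all assumptions for illustration; this is an untested sketch of the idea, not a working script:]

```
#!/bin/sh
# Pseudocode: bring up iSCSI from an initrd, then switch root to the LUN.
insmod /lib/iscsi.o                # load the software initiator (name assumed)
/sbin/iscsid -c /etc/iscsi.conf    # log in to the target; LUN appears as a SCSI disk
mount /dev/sda1 /newroot           # mount the iSCSI-backed root filesystem
cd /newroot
pivot_root . initrd                # make the LUN the root fs, park the initrd
exec chroot . /sbin/init           # continue boot from the iSCSI root
```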
> My intention is to have domain 0 boot from local disk, but have all of
> the non-privileged domains boot off of iscsi backed VBDs.

Do you mean standard Xen VBDs, or is there an iSCSI thing called "VBDs"? If you mean what I think you mean, I'm not sure it would currently work: Xen's / XenoLinux's VBD code is, AFAIK, purely capable of virtualising local physical disks.

To clarify what I mean (apologies if this is redundant info): VBDs are implemented at the Xen level, not at the XenoLinux level. When you create a VBD in dom0 for a guest domain, dom0 is telling Xen "grant this domain access to this bit of disk". The XenoLinux VBD driver then talks to Xen (not dom0) to access the VBDs that have been created. (The special case is that dom0 has a VBD for each physical disk, giving the impression of a normal setup.)

Xen's VBDs can't re-export network-based block devices from dom0 (or indeed any other kind of device that indirects through network or filesystem layers in dom0), since Xen is not aware of these higher layers. To re-export iSCSI drives from dom0, I think you'd currently need to use NFS or something similar; to re-export them appearing as "just another VBD" to the guest would require extra code.

Is this relevant or do I have the wrong end of the stick?

Mark
> To re-export iSCSI drives from dom0, I think you'd currently need to use
> NFS or something similar - to re-export them appearing as "just another
> VBD" to the guest would require extra code.

And extra layers, latency, etc.

> Is this relevant or do I have the wrong end of the stick?

What you're saying sounds exactly right. Plus we can't stick a SW initiator in Xen without a TCP stack, so my only hope would be a HW initiator. How annoying. I wonder how much work it would be to support what I'm thinking about? Managing NFS root for n virtual machines is much more annoying. It would also make this a much harder sell internally.

-Kip
> What you're saying sounds exactly right. Plus we can't stick a SW
> initiator in Xen without a TCP stack. My only hope would be a HW
> initiator. How annoying. I wonder how much work it would be to
> support what I'm thinking about? Managing NFS root for n virtual
> machines is much more annoying to manage. It would also make this
> a much harder sell internally.

You could still presumably just have all the domains connecting directly to the target via iSCSI? But I assume you wanted to re-export as VBDs to avoid using any weird ramdisk-based hacks to get, effectively, an iSCSI-based root filesystem in each guest, so I realise this wouldn't be ideal. Maybe you could use NFS (or local partitions, possibly via the new virtual disk stuff) for each root fs, just for the basics, then use an iSCSI initiator in each domain to access all the interesting stuff?

There are some plans (here at Intel Research Cambridge) to implement an iSCSI "virtual channel processor" (see the paper at http://www.intel-research.net/Publications/Cambridge/110720030446_176.pdf), which would run the iSCSI protocol (with its own optimised net stack) in a domain on top of Xen and also appear like a normal device to guest OSes. This work won't be ready for some time, though. However, it sounds like it would get you exactly what you want (plus various other benefits).

Another option might be to re-export the iSCSI devices from dom0 over Xen's internal "network" using some other network-based protocol that could be used as a root fs. Yes, that is a very icky idea ;-) I imagine it would be possible to write some kind of user-space "proxy" that would access devices in dom0 in the normal user-program fashion and then have XenoLinux drivers in guest domains talk to that proxy (either through the internal network, or via the upcoming interdomain communication facilities). This could also be used to access weird things like dom0 disk files as block devices.
You'd expect penalties in performance and quality-of-service provision by doing this. No-one's currently working on it, and I don't know what the feeling is as to how worthwhile it would be... The virtual channel processors are the neatest solution but will take a while.

Other people may have suggestions, also...

Mark
> > Is this relevant or do I have the wrong end of the stick?
>
> What you're saying sounds exactly right. Plus we can't stick a SW
> initiator in Xen without a TCP stack. My only hope would be a HW
> initiator. How annoying. I wonder how much work it would be to
> support what I'm thinking about? Managing NFS root for n virtual
> machines is much more annoying to manage. It would also make this
> a much harder sell internally.

Once we have virtualised device drivers (i.e. drivers running in isolated domains), Xen's I/O architecture will be much more flexible. For example, you will be able to run a 'device domain' with a virtual block-device interface to other guest OSes, which talks iSCSI via a TCP stack.

We're aiming to implement this stuff for the OSDI submission deadline in the middle of May, so it's high on our priority list.

-- Keir
If you wanted to get something up and running straight away, you might also want to look at unfsd (it runs in user space, so it can re-export the LUNs you import with iSCSI - I don't think the kernel NFSd will).

To ease the pain of using NFS to manage multiple machines, you could try ClusterNFS (an enhancement of unfsd to make it easier to manage clusters - may be useful for you; this was mentioned by Bin Ren in an earlier thread). Also, the user-level copy-on-write nfsd (mentioned by Ian Pratt in another thread) might be good, although I don't know who's doing that or when it'll be ready...

Mark
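[For what it's worth, unfsd reads the standard /etc/exports format, so re-exporting the filesystem on an imported LUN might look something like the fragment below. The mount point and subnet are made up for illustration; check the unfsd/exports documentation for exactly which options the user-space daemon honours:]

```
# /etc/exports - re-export a filesystem mounted from an iSCSI LUN
# (hypothetical mount point and client subnet)
/mnt/lun0   10.0.42.0/255.255.255.0(rw,no_root_squash)
```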
On Wed, 2004-01-21 at 12:05, Williamson, Mark A wrote:

> If you wanted to get something up and running straight away, you might
> also want to look at unfsd (runs in user space, so can re-export the
> LUNs you import with iSCSI - I don't think kernel NFSd will).
>
> To ease the pain of using NFS to manage multiple machines, you could try
> ClusterNFS (an enhancement of unfsd to make it easier to manage clusters
> - may be useful for you - this was mentioned by Bin Ren in an earlier
> thread). Also, the user level copy-on-write nfsd (mentioned by Ian
> Pratt in another thread) might be good although I don't know who's doing
> that or when it'll be ready...

The Debian 'diskless' packages used to be good for this task too, but I do not know if they are still being kept alive.

Jacob
On Tue, 2004-01-20 at 19:34, Kip Macy wrote:

> > I've just added the SCSI menu option from arch/i386/config.in
> > (see attached), and built a xenolinux with SCSI and SCSI disk
> > support compiled in. I'm not sure how you enable iSCSI support
> > (or indeed if the standard kernel even has iSCSI support).
>
> Great - thanks. There is no stock support of iSCSI in Linux.
> The Redhat and Suse distributions both have one. I've always used
> the one from Cisco on sourceforge. The question is what do you do
> for a target? I believe that one has been written for Linux, but
> have never used it. I've written one for FreeBSD that could be easily
> ported to Linux.

It was pointed out to me yesterday that the Intel iSCSI code for Linux (http://sourceforge.net/projects/intel-iscsi/) has a target driver. A quick glance through the README suggests that the target can be either a RAM disk or a raw disk device (in which case the target driver runs in userland). I have never used this software, so I can't comment on completeness or performance.

> > Exporting disks to domains via iSCSI would be cool. There's
> > always the alternative of the much simpler 'enbd', but iSCSI
> > sounds nicer. I wonder if there's support for iSCSI root devices?
>
> iSCSI is the more general of the two. Data ONTAP now supports exporting
> LUNs over FCP and iSCSI. One can create a LUN as "golden image" and
> clone it arbitrarily many times. Thanks to the COW nature of WAFL the
> only additional space required by the cloned LUNs is for modifications.

To clarify, you are talking about NetApp solutions here, not generic iSCSI features, right?
As I understand it, you can configure block-level networked access to virtual disks on a NetApp filer via iSCSI, and the filer can be configured to do COW for these virtual disks.

If this is correct, why don't you just configure each domain to talk iSCSI to the filer directly via its virtual network interface, instead of exporting the virtual disks (LUNs) through VBDs from dom0 as your initial email seemed to indicate? Or is the root partition problem the issue?

Cheers
Rolf
> to clarify, you are talking about netapp solutions here not generic
> iSCSI features, right? as i understand it you can configure block-level
> networked access to virtual disks on a netapp filer via iSCSI, and the
> filer can be configured to do COW for these virtual disks.

Correct.

> if this is correct why don't you just configure each domain to talk
> iSCSI to the filer directly via their virtual network interfaces instead
> of exporting the virtual disks (LUNs) through VBDs from dom0 as your
> initial email seem to indicate? or is the root partition problem the
> issue?

For the first pass, where I'm only running Linux, that will, at least in principle, work. There are a couple of issues that make that approach more work. From a configuration standpoint, I want the virtual machines that act as sandboxes for developers to look like normal machines; this entails the iSCSI backing looking like a normal disk for all intents and purposes. Second, I want to be able to map a developer to a LUN and then build a domain backed by that LUN on an arbitrary physical machine. This would have the additional benefit of fully anonymizing the hardware.

In the near future I want to be able to run other operating systems that do not have iSCSI initiator support, nor ever will, in virtual machines. For these, NFS/ramdisk root is not an option, so I need to take the LUN-mapping approach. I'll just have to hope that I can pull the Adaptec iSCSI HW initiator driver into Xen, and that all the configuration tools will work.

-Kip
> In the near future I want to be able to run other operating systems that
> do not have iscsi initiator support, nor ever will, in virtual machines.
> For this NFS/RAMDISK root is not an option. Thus I need to take the LUN
> mapping approach. I'll just have to hope that I can pull the Adaptec
> iSCSI HW initiator driver into Xen, and that all the configuration tools
> will work.

On a more general note, Xen currently assumes that all VBDs are backed by local disk. We need a mechanism to 'plumb' a specified VBD such that read/write requests go to another domain (where arbitrary processing can be performed) rather than out to local disk.

We need this for a whole bunch of different applications people want to use Xen for (honeypots, debugging, fault injection, hardware transparency, etc.).

Fortunately, this kind of thing is going to be quite a bit easier under the ring-1 I/O model. I think the performance will be pretty good -- we'll never copy data, and we'll attempt to minimize the number of protection-domain switches through pipelining.

Ian
> On a more general note, Xen currently assumes that all vbds are
> backed by local disk. We need a mechanism to 'plumb' a specified
> vbd such that read/write requests go to another domain (where
> arbitrary processing can be performed) rather than out to local
> disk.
>
> Fortunately, this kind of thing is going to be quite a bit easier
> under the ring-1 I/O model. I think the performance will be
> pretty good -- we'll never copy data, and attempt to minimize the
> number of protection domain switches through pipelining.

Exactly. VBD requests will go directly to the domain containing the device driver (via some shared-memory comms model). That domain can implement whatever it needs without having to bloat Xen at all (e.g., you could run a full-blown OS, with a TCP stack and anything else you need). In future I don't think that any of the block-device or network code (not even the VBD interface) will reside in Xen -- it can all be done in driver domains.

-- Keir
Hi Ian,

What kind of timeframes are we talking, more or less, to get the ring-1 I/O model stable? In what branch of Xen does this development happen - 1.3?

Thanks,
Jan

On 22 Jan 2004, at 6:04 AM, xen-devel-request@lists.sourceforge.net wrote:

> Fortunately, this kind of thing is going to be quite a bit easier
> under the ring-1 I/O model. I think the performance will be
> pretty good -- we'll never copy data, and attempt to minimize the
> number of protection domain switches through pipelining.
> What kind of timeframes are we talking, more or less, to get the ring-1
> I/O model stable? In what branch of Xen does this development happen,
> 1.3?

The work is just starting, and will occur in the 1.3 unstable branch. I don't think it will take too long (4-8 weeks) to get equivalent functionality to 1.2, but the bells and whistles will take longer...

Ian