In the last couple of days we've been playing around with gnbd as an
alternative to iSCSI, as we found that the performance of the current
Cisco Linux iSCSI implementation was fairly awful talking to our NetApp
hardware target. I'm happy to report that gnbd seems to work well. We've
set up a number of machines with dom0 running both the gnbd client and
server, giving us easy access to LVM volumes across the whole set of
machines.

I haven't tried it, but the csnap writeable snapshot driver looks worth
investigation too -- its design is rather more reassuring than lvm2
snap.

On a separate point, has anyone any experience setting up any of the
GFS, OCFS2 or Lustre cluster file systems? Are they ready for prime
time?

Thanks,
Ian

gnbd notes
==========
http://sources.redhat.com/cluster/gnbd/

./configure --kernel_src=../xeno.bk/linux-2.6.8.1-xen0/
make ARCH=xen

server side:
  gnbd_serv
  gnbd_export -vce ian-fc2-1 -d /dev/vg/fc2-1

client side:
  insmod gnbd-kernel/src/gnbd.ko
  gnbd_import -i server_name

The imported device is then available as /dev/gnbd/ian-fc2-1, e.g.:

  disk = [ 'gnbd/ian-fc2-1', 'sda1', 'w' ]
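To fill in the one step the notes take for granted, the exported device
is just an ordinary LVM logical volume created in dom0. A minimal sketch
follows; the volume group name 'vg' and the 4G size are illustrative
assumptions, and the dd sanity check is not from the notes above:

  # server side: create the logical volume that gnbd_export will serve
  # ('vg' already exists here; name and size are illustrative)
  lvcreate -L 4G -n fc2-1 vg

  # export it exactly as in the notes above
  gnbd_serv
  gnbd_export -vce ian-fc2-1 -d /dev/vg/fc2-1

  # client side: after gnbd_import, sanity-check the imported device
  # before handing it to a guest (reads 16MB through the gnbd path)
  dd if=/dev/gnbd/ian-fc2-1 of=/dev/null bs=1M count=16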
Ian Pratt wrote:
> In the last couple of days we've been playing around with gnbd as
> an alternative to iSCSI, as we found that the performance of the
> current Cisco Linux iSCSI implementation was fairly awful talking
> to our NetApp hardware target.

Cool, I need to try this as an alternative to the 'iscsitarget' from
SourceForge, even though that also works fairly well serving disks from
dom0.

What kind of performance improvement did you experience? This is not
just due to the NetApp filer being on a separate network with a router
or firewall in between (if I recall correctly)?

> I haven't tried it, but the csnap writeable snapshot driver looks
> worth investigation too -- its design is rather more reassuring
> than lvm2 snap.

Perhaps it is better to have the writable/client-specific parts of your
root filesystem (/tmp, /var/tmp, perhaps /etc) mounted via NFS (or
something else, or just as symlinks to a separate device) on top of a
read-only generalized rootfs (like the Debian diskless packages used to
do), rather than trying to handle this at the block level. It seems to
me all sorts of bad stuff can happen with a writable block-level
overlay, for instance if you try to upgrade the filesystem underneath.

Jacob
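For concreteness, one way the mount layout Jacob describes might look.
Everything here is an assumption for illustration, not from the message:
the server name 'nfsserver', the export paths, and the choice of tmpfs
for /tmp are all hypothetical:

  # /etc/fstab sketch: shared read-only root with writable,
  # client-specific pieces layered on top
  nfsserver:/exports/rootfs        /      nfs    ro,hard,intr   0 0
  none                             /tmp   tmpfs  defaults       0 0
  nfsserver:/exports/client1/var   /var   nfs    rw,hard,intr   0 0
  nfsserver:/exports/client1/etc   /etc   nfs    rw,hard,intr   0 0

The upgrade problem then goes away: the shared root can be replaced on
the server without any client-writable state living underneath it at
the block level.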
> What kind of performance improvement did you experience? This is not
> just due to the NetApp filer being on a separate network with a
> router or firewall in between (if I recall correctly)?

With gnbd we were getting sequential read performance equivalent to
native disk performance, 40MB/s (though with more CPU burn). With Linux
2.4 and linux-iscsi-3.6.1 talking to our NetApp filer we were seeing
around 10MB/s, as I recall. It's not a fair comparison, as we don't
know what else was loading the filer at the time. The NetApp probably
isn't optimised for iSCSI anyhow (it's a great NFS/CIFS server).

I haven't investigated the level of CRC etc. protection offered by gnbd
vs iSCSI, but I doubt gnbd is as sophisticated. It seems to work pretty
well, though, and is easy to set up.

[Just to follow up on my previous message: when building gnbd, various
binaries failed to build because the magma headers/libraries weren't
installed. I just did a 'make -i' to ignore the errors and ended up
with a working system, provided you use the '-c' option to gnbd_export.
The magma stuff is to do with cluster monitoring.]

> > I haven't tried it, but the csnap writeable snapshot driver looks
> > worth investigation too -- its design is rather more reassuring
> > than lvm2 snap.
>
> Perhaps it is better to have the writable/client-specific parts of
> your root filesystem (/tmp, /var/tmp, perhaps /etc) mounted via NFS
> (or something else, or just as symlinks to a separate device) on top
> of a read-only generalized rootfs (like the Debian diskless packages
> used to do), rather than trying to handle this at the block level.
> It seems to me all sorts of bad stuff can happen with a writable
> block-level overlay, for instance if you try to upgrade the
> filesystem underneath.

If only there were a decent file-system-level CoW/overlay/union/
stackable file system for Linux... There are a whole bunch of
implementations, but none of them seem particularly well supported. I
don't know of any that exist for 2.6 -- does anyone on the list? We
have one that works as a user-space NFS server, but "lightning fast" is
not how I'd describe it...

Ian
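For anyone wanting to reproduce the comparison above, a crude
sequential-read test can be run with dd against the raw devices. The
device names are illustrative; throughput is bytes read divided by the
elapsed time, and the 'sys' figure from time(1) gives a feel for the
extra CPU burn:

  # local disk baseline (illustrative device name)
  time dd if=/dev/sda of=/dev/null bs=1M count=1024

  # the same test over gnbd
  time dd if=/dev/gnbd/ian-fc2-1 of=/dev/null bs=1M count=1024

  # use a count large enough to swamp any caching; compare elapsed
  # times for throughput and 'sys' times for CPU cost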