I'm toying with iSCSI and RAID at the moment. The idea is to have two physical machines with Xen domains spread between them. Both run iSCSI, and each domain uses a mirrored set built from an iSCSI volume on each physical server. If a server needs to be shut down, its domains can be migrated to the other server. If a server crashes (e.g. melts down to a puddle on the floor), the domains it was hosting can be restarted on the other server. That's the theory, anyway.

Currently my environment consists of LVM volumes on Xen0, with iSCSI exporting them; these are then imported into the xenU domains and RAID1'd there. It would be nice to have a single RAID1 volume in xenU which is sliced and diced via LVM, but then I've got LVM + iSCSI + RAID1 + LVM, which can't be good for performance. Performance isn't a really high concern here, but still...

The idea of running LVM in Xen0 is to be able to resize volumes with comparative ease.

Any comments or suggestions?

Thanks,
James
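PS: for concreteness, the per-domain plumbing I have in mind looks roughly like the sketch below. It's untested, assumes the iSCSI Enterprise Target on Xen0 and mdadm in xenU, and the volume group, IQN, and device names are all made up.

    # Xen0, on each physical server: carve out a backing volume and export it
    lvcreate -L 10G -n domU1-disk vg0

    # /etc/ietd.conf (iSCSI Enterprise Target), one target per volume:
    #   Target iqn.2004-08.com.example:serverA.domU1
    #       Lun 0 Path=/dev/vg0/domU1-disk,Type=fileio

    # xenU: the initiator presents the two exports (one from each server)
    # as, say, /dev/sda and /dev/sdb; mirror them with md
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext3 /dev/md0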
> Currently my environment consists of LVM volumes on Xen0, with iSCSI
> exporting them; these are then imported into the xenU domains and
> RAID1'd there. It would be nice to have a single RAID1 volume in xenU
> which is sliced and diced via LVM, but then I've got LVM + iSCSI +
> RAID1 + LVM, which can't be good for performance. Performance isn't a
> really high concern here, but still...
>
> The idea of running LVM in Xen0 is to be able to resize volumes with
> comparative ease.
>
> Any comments or suggestions?

I haven't measured it, but I see no reason why LVM should have much of
an impact on block-device performance. It's only doing block address
translation on the data path.

 -- Keir
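PS: the resize path mentioned in the quote, in sketch form. The names are invented, and this assumes the xenU initiator notices the new device size after the Xen0 lvextend, plus an mdadm recent enough to grow an array.

    # Xen0, on both servers, so the two mirror halves stay the same size:
    lvextend -L +5G /dev/vg0/domU1-disk

    # xenU, once the initiator has seen the larger device:
    mdadm --grow /dev/md0 --size=max   # extend the RAID1 onto the new space
    umount /mnt/data                   # /mnt/data is a made-up mount point
    resize2fs /dev/md0                 # grow the filesystem offline
    mount /dev/md0 /mnt/data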
I can still saturate 100 Mbps here with IDE + BBR + RAID5 + LVM + snapshot over NFSv3, with no discernible overhead beyond the normal IDE stuff. Haven't gotten iSCSI up yet. My domains run on Xen machines that then hook up to the two NFS servers. I'll be adding mirroring later for the same purpose as yours: a redundant disk in case of total iSCSI server loss.

On Sun, 2004-08-29 at 07:33, Keir Fraser wrote:
> > Currently my environment consists of LVM volumes on Xen0, with iSCSI
> > exporting them; these are then imported into the xenU domains and
> > RAID1'd there. It would be nice to have a single RAID1 volume in xenU
> > which is sliced and diced via LVM, but then I've got LVM + iSCSI +
> > RAID1 + LVM, which can't be good for performance. Performance isn't a
> > really high concern here, but still...
> >
> > The idea of running LVM in Xen0 is to be able to resize volumes with
> > comparative ease.
> >
> > Any comments or suggestions?
>
> I haven't measured it, but I see no reason why LVM should have much of
> an impact on block-device performance. It's only doing block address
> translation on the data path.
>
> -- Keir
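PS: for anyone following along, the snapshot layer in the stack above is just another LV, so it adds a command rather than a subsystem. A one-line sketch with invented names:

    # copy-on-write snapshot of the backing volume, 1G of change tracking
    lvcreate -s -L 1G -n domU1-snap /dev/vg0/domU1-disk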