I'm still having some struggles with ocfs2 and currently have a case open with Novell. Can someone suggest other options for shared storage using SLES10/11 other than ocfs2?

Background:
- My storage comes from a Xiotech Magnitude 4000 3D SAN connected over fiber (QLogic cards).
- Currently (for ocfs2) the same LUN is assigned to multiple servers.
- I'm using file-based disks for VMs. My preference would be to keep doing it this way.
- I know NFS is an option, but it seems it would add too many points of failure, and losing NFS would take every VM down. Plus I'm not sure how performance would be.

What other shared storage options do I have? Is there a good article/wiki that explains the different storage options in relation to Xen?

Help is appreciated.

Thanks,
James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Hi James,

On Wednesday, 2010-03-03 at 14:28 -0500, James Pifer wrote:
> I'm still having some struggles with ocfs2 and currently have a case
> open with Novell. Can someone suggest other options for shared storage
> using SLES10/11 other than ocfs2?
>
> Background:
> -My storage is coming from a Xiotech Magnitude 4000 3D SAN connected
> with fiber (QLogic cards)

ok

> -currently (for ocfs2) the same lun is assigned to multiple servers.

ok

> -I'm using file based disks for VMs. My preference would be to keep
> doing this way

So you need to use a cluster FS. ocfs2, GFS and so on are the way to go.

> -I know NFS is an option, but seems that it would add too many points of
> failure, and losing nfs would take every VM down. plus I'm not sure how
> performance would be.
>
> What other shared storage options do I have?

If you have a shared (FC) LUN between your servers, I would suggest using LVM in this scenario. Just beware of the LVM write cache, which should be disabled, because delayed writes cause failures in HA failover or live migration.

> Is there a good article/wiki that might explain the different storage
> options in relation to xen?

pvcreate /dev/sdb1   (where sdb1 is a partition on this FC LUN)
vgcreate data /dev/sdb1

Then use phy:/dev/data/$LUN-VM1 in Xen ;-)

That's all you need :-D

> Help is appreciated.
>
> Thanks,
> James

hth,
thomas
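Thomas's three fragments, assembled into one sequence for clarity (a sketch only: the device /dev/sdb1, the volume group name "data", the volume name "vm1-disk" and its size are illustrative, not prescribed in the thread):

```shell
# On one dom0: initialise the shared FC LUN for LVM and create a volume group.
pvcreate /dev/sdb1                 # sdb1 = a partition on the shared FC LUN
vgcreate data /dev/sdb1

# Carve one logical volume per domU disk.
lvcreate -L 20G -n vm1-disk data

# On every other dom0 sharing the LUN: rescan so the VG and its LVs appear.
pvscan
vgscan
vgchange -ay data

# In the domU config, reference the LV as a physical device:
#   disk = [ 'phy:/dev/data/vm1-disk,xvda,w' ]
```

Note this gives shared block storage without cluster locking; as discussed below, something still has to ensure a domU is never started on two hosts at once.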
> > -I'm using file based disks for VMs. My preference would be to keep
> > doing this way
>
> So you need to use a cluster-FS. ocfs2, gfs and so on are the way to go.

Can gfs be used on sles?

> > -I know NFS is an option, but seems that it would add too many points of
> > failure, and losing nfs would take every VM down. plus I'm not sure how
> > performance would be.
> >
> > What other shared storage options do I have?
>
> If you have a shared (FC)-LUN between your server i would suggest to use
> LVM in this scenario.
>
> Just beware of using LVM-write-cache, which should be disabled, because
> of delayed writes, which cause failures in HA-Failover or
> Live-Migration.

So each server gets the same LUN, just like I do now, and I create the LVM as you suggest. Then each server will see that LVM? Would I also rely on xen's locking mechanisms to make sure two servers don't try to access the same VM disk files?

You also lost me with this line: where do I use this at?

use phy:/dev/data/$LUN-VM1 in xen ;-)

Thanks,
James
On Wed, 2010-03-03 at 16:07 -0500, James Pifer wrote:
> > > -I'm using file based disks for VMs. My preference would be to keep
> > > doing this way
> >
> > So you need to use a cluster-FS. ocfs2, gfs and so on are the way to go.
>
> Can gfs be used on sles?

Found my answer to this, and that would be no. Novell feels ocfs2 is superior; unfortunately, as of sles11 they now sell it as a separate product. Nice...

James
On Wed, Mar 03, 2010 at 04:25:43PM -0500, James Pifer wrote:
> On Wed, 2010-03-03 at 16:07 -0500, James Pifer wrote:
> > > > -I'm using file based disks for VMs. My preference would be to keep
> > > > doing this way
> > >
> > > So you need to use a cluster-FS. ocfs2, gfs and so on are the way to go.
> >
> > Can gfs be used on sles?
>
> Found my answer to this and that would be No. Novell feels ocfs2 is
> superior, unfortunately as of sles11 they now sell it as a separate
> product. Nice...

Are your file-backed domUs on a SAN/NAS? If not, where do the disks that make up your cluster reside? You can cluster LVM, though you lose snapshotting, and NFS is also worth considering depending on how the shares are exported, e.g. via a NAS where hardware can be more easily hot-swapped. Migration to NFS would be easy as well, since you could just mount your files, copy the contents to an NFS share, and restart your domU.

Jamon
> Are your file backed domUs on a SAN/NAS? If not, where do the disks
> that make up your cluster reside? You can cluster LVM, though you lose
> snapshotting, and NFS is also worth considering depending on how the
> shares are exported e.g. via a NAS where hardware can be more easily
> hot-swapped. Migration to NFS would be easy as well since you could just
> mount your files, copy the contents to an NFS share, and restart your domU.
>
> Jamon

Yes, file-based domUs are currently on ocfs2 on a SAN.

I don't do any snapshotting right now, but that's not to say I won't want to some day. My goal is to get to a point where things are stable and I can run something to manage everything, i.e. Orchestrate or Convirture: something to manage restarting/migrating domUs when one of the servers has problems, or when I just need to have a server down for maintenance.

So in general terms, how would I set up LVM (clvm)? Let's say I have two servers (in this case running sles11). Each server has the SAME vdisk (LUN) from our Xiotech SAN assigned to it for storage. Let's say it's a 400gb vdisk. If I add additional servers, they too would be assigned the same vdisk. Similarly, I could add additional vdisks when more storage is required.

Anyway, on the first server I set up LVM. Somehow the second server would also see that LVM and be able to mount it?

I will search for some documentation on setting up clvm, but my questions are:

1) Is this where I would/could use clvm?
2) On each server, would they see the logical volume at a mount point?
3) Would I need to use xen's built-in locking mechanisms, or is there some built-in locking like ocfs2 has?

The only way I'm familiar with nfs is by running nfs on a server and then other servers mounting nfs from that server. Are you saying that the SAN could somehow directly provide nfs?

Sorry if some of my questions don't make sense.
Thanks,
James
Hi James,

On Wednesday, 2010-03-03 at 17:34 -0500, James Pifer wrote:
> > Are your file backed domUs on a SAN/NAS? If not, where do the disks
> > that make up your cluster reside? You can cluster LVM, though you lose
> > snapshotting, and NFS is also worth considering depending on how the
> > shares are exported e.g. via a NAS where hardware can be more easily
> > hot-swapped. Migration to NFS would be easy as well since you could just
> > mount your files, copy the contents to an NFS share, and restart your domU.
> >
> > Jamon
>
> Yes, file based domU's are currently on ocfs2 on a SAN.
>
> I don't do any snapshotting right now, but that's not to say I won't
> want to some day.

Yes, but you _want_ to be able to snapshot your domUs, for "hot backup" of running domUs.

> My goal is to get to a point where things are stable
> and I can run something to manage everything, ie Orchestrate,
> Convirture, something to manage restarting/migrating domU's when one of
> the servers has problems. Or I just need to have a server down for
> maintenance.

You're searching for openQRM ;-)

> So in general terms, how would I setup LVM (clvm)? Let's say I have two
> servers (in this case running sles11). Each server has the SAME
> vdisk(LUN) from our Xiotech SAN assigned to it for storage. Let's say
> it's a 400gb vdisk. If I add additional servers, they too would be
> assigned the same vdisk. Similarly, I could add additional vdisks when
> more storage is required.

You have to make sure that _EVERY_ dom0 "sees" the storage LUN! If your hosts are able to connect to this storage, ALL hosts should be able to use these volumes (LVols).

> Anyway, on the first server I setup LVM. Somehow the second server would
> also see that as lvm and be able to mount it?

pvscan
vgscan
vgchange -ay

should be enough that ALL hosts can start the domUs residing in this LUN / VG.
Just put something like this in an init script...

> I will search for some documentation on setting up clvm, but my
> questions are:
>
> 1) Is this where I would/could use clvm?

Yes, but if you use a management tool like openQRM, ConVirt or whatever, the management solution takes care of starting/stopping domUs, so locking is not really necessary.

> 2) On each server would they see the logical volume at a mount point?

Yes.

> 3) Would I need to use xen's built in locking mechanisms or is there
> some builtin locking like ocfs2 has?

Nope. You just have to take care that you don't start your domUs twice. So use a management solution which takes care of this.

> The only way I'm familiar with nfs is by running nfs on a server and
> then other servers mounting nfs from that server. Are you saying that
> the SAN could somehow directly provide nfs?

e.g. NetApp provides direct NFS access, and ZFS-based solutions like NexentaStor do too, and so on. It depends on the vendor which storage technologies are supported and available.

> Sorry if some of my questions don't make sense.
> Thanks,
> James

hth,
thomas
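Thomas's "init-script" hint might look roughly like this (an untested LSB-style sketch; the VG name "data" is an example, and on SLES the script would also need the standard INIT INFO header and dependency ordering after the FC HBA driver):

```shell
#!/bin/sh
# /etc/init.d/shared-vg: activate the shared volume group at boot,
# after the FC HBA driver has attached the LUN.
case "$1" in
  start)
    pvscan              # rediscover physical volumes, including the shared LUN
    vgscan              # rebuild the volume group metadata cache
    vgchange -ay data   # activate all logical volumes in VG "data" on this host
    ;;
  stop)
    vgchange -an data   # deactivate cleanly before shutdown
    ;;
esac
```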
> -----Original Message-----
> So in general terms, how would I setup LVM (clvm)? Let's say I have two
> servers (in this case running sles11). Each server has the SAME
> vdisk(LUN) from our Xiotech SAN assigned to it for storage. Let's say
> it's a 400gb vdisk. If I add additional servers, they too would be
> assigned the same vdisk. Similarly, I could add additional vdisks when
> more storage is required.

Each node in your cluster must be able to access the storage exported from your SAN. You'd create a CLVM much like an LVM: first configure your cluster, then create a volume group on the SAN-exported device. Once created, you'd start clvmd and run "vgchange -c y" on the volume group to promote it to a clustered volume group.

> Anyway, on the first server I setup LVM. Somehow the second server would
> also see that as lvm and be able to mount it?

When a volume group is clustered, all nodes participating in your cluster will see this volume group (e.g. in a "vgs" command), and any changes in this volume group (creating/deleting logical volumes, etc.) will be synchronized to all nodes in the cluster.

> I will search for some documentation on setting up clvm, but my
> questions are:
>
> 1) Is this where I would/could use clvm?

We use it to carve out raw storage from our SAN and export logical volumes to domUs as block devices. This is a good use of CLVM. You can use it with a clustered file system (GFS) or by itself.

> 2) On each server would they see the logical volume at a mount point?

Each node would see each logical volume in the clustered volume group as a block device. That block device can be used raw, exported to a domU (e.g. via phy:) or used for a filesystem. A standard filesystem created within such a logical volume can only be mounted on one node at a time. To mount on multiple nodes at once, you need a clustered filesystem to ensure file metadata is synchronized amongst all cluster nodes.

> 3) Would I need to use xen's built in locking mechanisms or is there
> some builtin locking like ocfs2 has?

CLVM has its own locking protocol based on DLM.

-Jeff
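Jeff's procedure, condensed into commands (a sketch assuming a Red Hat-style cluster stack; the service names and the VG name "data" are illustrative, and the exact steps vary by distribution and cluster manager):

```shell
# Prerequisite on every node: the cluster stack and the clustered LVM daemon.
service cman start     # cluster membership + DLM (distribution-specific)
service clvmd start    # clustered LVM daemon

# On one node: create the volume group on the SAN-exported device,
# then promote it to a clustered volume group.
pvcreate /dev/sdb1
vgcreate data /dev/sdb1
vgchange -c y data     # set the clustered attribute

# Every node in the cluster now lists the VG and stays in sync:
vgs
```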
> > Yes, file based domU's are currently on ocfs2 on a SAN.
> >
> > I don't do any snapshotting right now, but that's not to say I won't
> > want to some day.
>
> Yes, but you _want_ to be able to snapshot your domUs, for
> "hot backup" of running domUs.

This is a bit off the topic of my original post, but can you elaborate just a little on this? How do you use snapshots for hot backup?

> > My goal is to get to a point where things are stable
> > and I can run something to manage everything, ie Orchestrate,
> > Convirture, something to manage restarting/migrating domU's when one of
> > the servers has problems. Or I just need to have a server down for
> > maintenance.
>
> You're searching for openQRM ;-)

Thanks for this suggestion. I glanced at it briefly and will definitely look at it some more. Have you looked at convirt? I liked how that looked, but the HA features appear to cost extra.

> > Anyway, on the first server I setup LVM. Somehow the second server would
> > also see that as lvm and be able to mount it?
>
> pvscan
> vgscan
> vgchange -ay
>
> should be enough that ALL hosts can start the domUs residing in this
> LUN / VG.

Ok, I may try this tomorrow, but just to clarify: doing this does NOT allow me to use snapshotting? Why is that?

So if I have the same 400gb device (LUN) assigned to each server:
- I create a logical volume
- I create a file system on that logical volume
- I mount that on each server, using the same name for the mount point

So essentially each server would see /data, which is the file system on the LVM. Under /data/images I could store my file-based domUs. That's essentially what I'm doing right now on ocfs2. Does snapshotting work on ocfs2? If so, how are they different in terms of snapshotting?

Thanks,
James
On Wednesday, 2010-03-03 at 18:44 -0500, James Pifer wrote:
> This is a bit off the topic of my original post, but can you elaborate
> just a little on this? How do you use snapshot for hot-backup?

I wrote a script that does this job. This script rotates old backups and creates a snapshot of a running domU:

# lvcreate -L 10G -name backup /dev/data/domu-disk

mounts that snapshot:

# mount /dev/data/backup

and rsyncs the FS to the backup location:

# rsync -avzH --numeric-ids -e ssh user@server

The backup location holds 6 daily, 4 weekly and 3 monthly backups of each domU. The latest backup also goes to tape for archiving.

> Thanks for this suggestion. Glanced at it briefly and will definitely
> look at it some more. Have you looked at convirt? I liked how that
> looked, but HA features appear to cost extra.

openQRM is GPL ;-)

> Ok, I may try this tomorrow, but just to clarify, doing this does NOT
> allow me to use snapshotting? Why is that?

I suggest you use LVM and NOT CLVM. CLVM does not support snapshotting at all. For backup it's nice to use one Xen host as the backup machine, which runs the backup through LVM snapshotting.

> So if I have the same 400gb device(LUN) assigned to each server.
> I create a logical volume

LVM is a _volume manager_. LVM virtualises your disk(s) into volume groups and lets you dynamically assign disk space as logical volumes. Each volume can hold its own filesystem, so you create as many LVols as you have domUs. In my case:

grep phy /etc/xen/domu.cfg
'phy:/dev/data/domu-swap,sda1,w',
'phy:/dev/data/domu-disk,sda2,w',

> I create a file system on that logical volume
> Mount that on each server using the same name for the mount point.

No. Each server is able to see each disk.

> So essentially each server would see /data which is the file system on
> the LVM. Under /data/images I could store my file based domUs.

LVM is _disk-based_ and not usable as a file-backed store for Xen, but it's smarter to use LVM ;-)

hth,
thomas
Thomas Halinka
2010-Mar-04 00:21 UTC
Corrected: [Xen-users] xen storage options - please advise
Sorry, it's late in Germany.

On Thursday, 2010-03-04 at 01:16 +0100, Thomas Halinka wrote:
> On Wednesday, 2010-03-03 at 18:44 -0500, James Pifer wrote:
...
> I wrote a script that does this job.
>
> This script rotates old backups and creates a snapshot of a running domU:
>
> # lvcreate -L 10G -name backup /dev/data/domu-disk

That line should be (with -s to make it a snapshot, and -n for the name):

# lvcreate -L10G -s -n backup /dev/data/domu-disk

cu,
thomas
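Putting Thomas's corrected fragments together, the backup script might look roughly like this (a sketch only: the mount point, rsync target and snapshot size are invented, rotation is omitted, and, as noted later in the thread, a dom0 snapshot of a running domU is only crash-consistent):

```shell
#!/bin/sh
# Crash-consistent hot backup of one domU disk via an LVM snapshot.
set -e
LV=/dev/data/domu-disk            # LV backing the running domU (example name)
MNT=/mnt/backup                   # temporary mount point (example)
DEST=user@server:/backup/domu/    # rsync destination (example)

lvcreate -L10G -s -n backup "$LV" # snapshot with 10G of copy-on-write space
mount -o ro /dev/data/backup "$MNT"
rsync -avzH --numeric-ids -e ssh "$MNT/" "$DEST"
umount "$MNT"
lvremove -f /dev/data/backup      # discard the snapshot once copied
```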
> LVM is _disk-based_ and not usable as filebacked-Store for XEN, but its
> smarter to use LVM ;-)

Thomas,

Thanks for all the information so far, but of course I have more questions. I just want to make sure I understand it.

Let me give you an example. Let's say I have five Windows 2008 servers. Right now they are using file-based, growable storage.

They each have a disk0 that is growable and partitioned like:
c: = 25gb
e: = 175gb (using round numbers)

I understand the danger of using growable storage and having your storage fill up, but instead of these five machines using 200gb each, or one TB total, they are currently only using 10-20gb each so far. (The sizes were spec'ed by the project, not my choice.)

Anyway, how would these work in LVM? Would I have to actually allocate 200gb for each of these on the physical disk?

I also have linux pv domUs that have 40 or 50gb disks. They don't need to be that big, but that is the way they were created. I currently have three dozen domUs which are a mix of linux (sles) and windows. Having to actually allocate the disk space that they are virtually using now would be a lot.

Thanks for the example scripts. I would really like to be able to do something like that.

Thanks,
James
On Thu, March 4, 2010 12:38 am, James Pifer wrote:
> Let me give you an example. Let's say I have five Windows 2008 servers.
> Right now they are using file based, growable storage.
>
> They each have a disk0 that is growable and partitioned like:
> c: = 25gb
> e: = 175gb (using round numbers)
>
> I understand the danger of using growable storage and having your
> storage fill up, but instead of these five machines using 200gb each, or
> one TB, they are currently only using 10-20gb each so far. (the sizes
> were spec'ed by the project, not my choice).
>
> Anyway, how would these work in LVM? Would I have to actually allocate
> 200gb for each of these on the physical disk?
>
> I also have linux pv domUs that have 40 or 50gb disks. They don't need
> to be that big, but is the way they were created. I currently have three
> dozen domUs which are a mix of linux (sles) and windows. Having to
> actually allocate the disk space that they are virtually using now would
> be a lot.

James,

I had similar questions a while ago, and back then the consensus was that snapshotting a running domU does *not* provide a consistent backup. This is especially the case with databases and applications which do not immediately flush writes to disk. I believe XenServer may do a better job, but I am unsure on this.

Basically, the only way to do a proper backup is to properly shut down the domU before taking the snapshot, thus ensuring there is no chance of losing data. For some people not doing this is an acceptable risk, and certainly better than no backup at all.

With regard to storage, the industry buzzword for what you describe is 'thin provisioning'. This, together with the de-duplication features offered by some SANs, makes for very efficient use of storage space. If I do this with Solaris, for example, it shouldn't really matter if I use a file system or LVM on the exported volume, because all I am doing is telling ZFS to export a zvol, and the dom0 sees this as block-level storage. I might set the maximum size of the volume to much more than the space currently available, and as it fills up I just need to ensure I keep pace and add more physical storage as needed. Your SAN product may well allow you to do the same thing. The advantage of LVM is that you can later add further space with minimal hassle.

If you need to use CLVM and lose LVM snapshots, could you take snapshots directly from your SAN instead?

Hope this helps,

Matt.
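Matt's Solaris/ZFS example as commands (a sketch; the pool/volume names are invented, and `shareiscsi` is the OpenSolaris-era property, replaced on later systems by COMSTAR):

```shell
# Thin-provisioned ("sparse") 400 GB zvol: the -s flag means pool space is
# only consumed as blocks are actually written by the client.
zfs create -s -V 400g tank/xen-lun

# Export it as block storage so each dom0 sees it as a LUN.
zfs set shareiscsi=on tank/xen-lun

# Later, raise the ceiling as real usage grows (after adding physical disks):
zfs set volsize=600g tank/xen-lun
```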
> James,
>
> I had similar questions a while ago and back then the consensus was that
> snapshotting a running domU does *not* provide a consistent backup. This
> is especially the case with databases and applications which do not
> immediately flush writes to disk. I believe XenServer may do a better
> job, but I am unsure on this.
>
> Basically, the only way to do a proper backup is to properly shutdown the
> domU before taking the snapshot, thus ensuring there is no chance of
> losing data. For some people not doing this is an acceptable risk and
> certainly better than no backup at all.
>
> With regard to storage, the industry buzzword for what you describe is
> 'thin provisioning'. This, together with the de-duplication features
> offered by some SANs makes for very efficient use of storage space.
>
> If you need to use CLVM and lose LVM snapshots, could you take snapshots
> directly from your SAN instead?
>
> Hope this helps,
>
> Matt.

I've read on this list before that snapshotting is not a reliable way of doing backups; I believe the term was that you would end up with a dirty file system. For most of the systems I run, I think snapshotting would probably be OK in a recovery situation, as long as they boot. Up to this point I treat my domUs as standalone machines running backup software. Recovery is not as easy. We do mirror our SAN in case of catastrophic failure, but obviously this is not snapshotting.

I'm still left with the same questions about usage of space. Can I use LVM or CLVM as one large storage repository that is mounted on each dom0, and then use this space to store file-based domUs?

Thanks,
James
> i suggest you to use LVM and NOT CLVM. CLVM does not support
> snapshotting at all.

CLVM itself doesn't allow snapshotting, but you can still run snapshots from within your domUs.

But keep in mind that running a snapshot from dom0, with an LV that is phy:-exported to your domU, won't provide a consistent snapshot anyway, so I don't see the point in snapshotting from dom0. (The domU maintains its own buffer cache and has no idea that a snapshot has been created from dom0; dom0 has no idea what the domU is doing and can only see the blocks that have been committed to disk, which are not necessarily consistent. This is roughly equivalent to creating a snapshot on your SAN without first syncing the client.)

Use CLVM. If you need snapshots, do them within the domU.

John

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
On Thu, Mar 04, 2010 at 09:28:58AM -0500, John Madden wrote:
> > i suggest you to use LVM and NOT CLVM. CLVM does not support
> > snapshotting at all.
>
> CLVM itself doesn't allow snapshotting, but you can still run snapshots
> from within your domUs.
>
> But keep in mind that running a snapshot from dom0 with a LV that is
> phy: exported to your domU won't provide a consistent snapshot anyway,
> so I don't see the point in snapshotting from dom0.
>
> Use CLVM. If you need snapshots, do them within the domU.

Or use XCP. It can do LVM snapshots in a cluster, with shared storage.

XCP doesn't use CLVM, but it uses other methods to share the LVM volumes across all hosts/nodes in the cluster.

-- Pasi
On Thu, Mar 04, 2010 at 04:55:57PM +0200, Pasi Kärkkäinen wrote:
> Or use XCP. It can do LVM snapshots in a cluster, with shared storage.

Uhm, that was supposed to say "XCP can do VHD snapshots with LVM storage, shared across all cluster nodes". XCP doesn't use LVM snapshots.

-- Pasi

> XCP doesn't use CLVM, but it uses other methods to share the LVM volumes
> across all hosts/nodes in the cluster.
On Thu, 2010-03-04 at 09:47 -0500, John Madden wrote:
> > I'm still left with the same questions about usage of space. Can I use
> > LVM or CLVM as one large storage repository that is mounted on each
> > dom0? Then use this space to store file based domUs?
>
> I think part of the beauty of CLVM is that you wouldn't have to resort
> to file-based domU's anymore. You should achieve better performance
> overall by phy:'ing block devices to your domU's. CLVM allows you to
> pass those LV's around between your dom0's in a consistent manner for
> migration and such.

Nobody has answered the question of disk space usage. Let's say I have five Windows 2008 servers. Right now they are using file-based, growable storage.

They each have a disk0 that is growable and partitioned like:
c: = 25gb
e: = 175gb (using round numbers)

Using clvm, would I be using 1TB of storage for these five domUs?

Thanks,
James
> Nobody has answered the question of disk space usage. Let's say I have
> five Windows 2008 servers. Right now they are using file based, growable
> storage.
>
> They each have a disk0 that is growable and partitioned like:
> c: = 25gb
> e: = 175gb (using round numbers)
>
> Using clvm, would I be using 1TB of storage for these five domUs?

Of course: you aren't gaining anything magic by using one disk technology over another unless you start talking about de-duplication. If you want the benefits of the "auto-growing sparse files" that you're currently using, consider allocating the amount of storage you actually need and growing it (via lvm) as you go.

John

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
On Thu, 2010-03-04 at 10:18 -0500, John Madden wrote:
> > Nobody has answered the question of disk space usage. Let's say I have
> > five Windows 2008 servers. Right now they are using file-based,
> > growable storage.
> >
> > They each have a disk0 that is growable and partitioned like:
> > c: = 25 GB
> > e: = 175 GB (using round numbers)
> >
> > Using CLVM, would I be using 1 TB of storage for these five domUs?
>
> Of course -- you aren't gaining anything magic by using one disk
> technology over another unless you start talking about de-duplication.
> If you want the benefits of the "auto-growing sparse files" that you're
> currently using, consider allocating the amount of storage you actually
> need and growing it (via LVM) as you go.
>
> John
>

Ok, so if I want to stop using "auto-growing sparse files" I would do
this:

1) Assign the same vdisk from the SAN to multiple servers.
2) Create an LVM volume group on this disk.
3) Each server would run these commands to detect and activate the LVM
   volumes:
   pvscan
   vgscan
   vgchange -ay
4) If I need to expand the LVM space due to growth, we would extend the
   vdisk, then use LVM to grow the space.

Essentially this gives me shared storage without any clustering. This
would allow me to live migrate domUs between servers.

Is that correct? Am I missing anything?

Thanks,
James
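[Step 4 above, spelled out as a sketch. This assumes the shared LUN appears as `/dev/sdb` with the PV on `/dev/sdb1` and a VG named `data` -- all hypothetical names; the rescan path varies by HBA driver, and if the PV sits on a partition, that partition must also be grown first.]

```shell
# After extending the vdisk on the SAN, make each host's kernel
# notice the larger LUN:
echo 1 > /sys/block/sdb/device/rescan

# Grow the physical volume to fill the newly available space:
pvresize /dev/sdb1

# Verify the volume group gained free extents:
vgdisplay data

# Then grow individual LVs as needed:
lvextend -L +100G /dev/data/vm1-disk
```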
> Uhm, that was supposed to say "XCP can do VHD snapshots with LVM
> storage, shared across all cluster nodes".
>
> XCP doesn't use LVM snapshots.
>
> -- Pasi
>
> > XCP doesn't use CLVM, but it uses other methods to share the LVM
> > volumes across all hosts/nodes in the cluster.
> >
> > -- Pasi

Pasi,

Can I ask you some questions about XCP? I know XCP and XenServer are
closely related. I've tried XenServer and for the most part I was pretty
happy with it, at least for Windows domUs. Converting my SLES domUs
seemed problematic. With Xen on SLES it's very easy to create SLES
domUs, but they are not built with pygrub. With Windows domUs I used
Clonezilla to back them up and restore them, which seemed to work fairly
well. I tried some of the conversion tools, but they took a lot longer
and didn't work very well. Besides converting, I also have to resize
most of these domUs or I'm wasting a ton of disk space.

Disk space has been discussed in this thread. In XCP you can't use any
growable sparse file, correct? So if I use LVM and start with a certain
size of shared storage, is it easy to increase or grow the storage?

Does XCP include high availability, which costs extra on XenServer?

One other thing I couldn't figure out how to do on XenServer was copy an
existing VM. Right now if I want to copy one, I just shut it down, copy
the disk image, and create a new one based on the image. I couldn't see
how to do the same thing on XenServer. How would you do that on XCP?

Maybe I'll download and try XCP and see what happens. Besides my
questions above, any general suggestions are appreciated.

Thanks,
James
> Essentially this is giving me a shared storage without any clustering.
> This would allow me to live migrate domU's between servers.

No, you need CLVM to share the storage.

John

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
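[For reference, turning plain LVM into CLVM on each host looks roughly like this. It is a sketch only: exact init scripts and helper tools vary by distribution (the `lvmconf` helper is the Red Hat one; on SLES you would edit `/etc/lvm/lvm.conf` by hand), and a working cluster manager (cman/openais) must already be running underneath clvmd.]

```shell
# Switch LVM to clustered locking (sets locking_type = 3 in /etc/lvm/lvm.conf):
lvmconf --enable-cluster

# Start the cluster LVM daemon on every host sharing the LUN:
/etc/init.d/clvmd start

# Create the volume group as clustered ...
vgcreate --clustered y data /dev/sdb1
# ... or convert an existing VG:
vgchange --clustered y data
```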
Javier Guerra Giraldez
2010-Mar-10 01:05 UTC
Re: [Xen-users] xen storage options - please advise
On Thu, Mar 4, 2010 at 10:38 AM, James Pifer <jep@obrien-pifer.com> wrote:
> Ok, so if I want to keep using "auto-growing sparse files" I would do
> this.

Just a guess, but I'd bet that your current performance problems are due
to using sparse files. In general:

- using a filesystem is a little slower than using block devices (with
  LVM, partitions, raw, whatever).
- a cluster filesystem is noticeably slower than a non-cluster
  filesystem.
- sparse files are terribly slower than non-sparse files on any
  filesystem.

--
Javier
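[To illustrate the distinction Javier is drawing, the same domU disk can be defined either way in a Xen domU config file. The paths and device names below are made up for illustration:]

```
# file-backed sparse image sitting on a (cluster) filesystem -- slowest:
disk = [ 'file:/var/lib/xen/images/vm1/disk0.img,xvda,w' ]

# phy: block device (an LV on the shared LUN) -- no filesystem or
# loopback layer in the I/O path:
disk = [ 'phy:/dev/data/vm1-disk,xvda,w' ]
```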