I'm currently running Xen 3.4 on SLES 11. I currently use OCFS2, but I'm going to move away from it because of issues we've been having. Instead, we'll be assigning LUNs from our SAN to the Xen servers, and domUs will essentially have their own disks. This puts a lot more space requirements in play. With that in mind, I want to shrink a lot of our domUs.

I've experimented with Windows and been fairly successful at backing up a Windows domU and restoring it with Clonezilla, including shrinking the drives by restoring to smaller disks. I have not had much luck converting my Linux (SLES) domUs. Any suggestions for making this happen?

sparse file based Linux (SLES) domU --> shrink --> block device based Linux (SLES) domU

Thanks,
James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Pifer wrote:
> sparse file based Linux (SLES) domU --> shrink --> block device based
> Linux (SLES) domU
>
> Any suggestions?

As long as you can mount the sparse file on Dom0 then it's trivial - just mount the source and destination volumes, and copy the files across. My preferred method is rsync, but there are plenty of options. For rsync, the syntax would be:

  rsync -avH --numeric-ids /source/ /dest/

You can convert mountpoints as well this way. E.g. if you previously had /home as a separate volume and want to combine it with /, then just mount all the volumes as required before doing the copy: mount the source / on /source, and the source /home on /source/home.

If you can't mount the source volume(s) on Dom0, then add the destination volume to the DomU and copy the files there.

In either case, you are simply sizing the new volumes as you make them and putting the files on later.

--
Simon Hobson
Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed author Gladys Hobson. Novels - poetry - short stories - ideal as Christmas stocking fillers. Some available as e-books.
>>> On 2010/04/22 at 10:52, James Pifer <jep@obrien-pifer.com> wrote:
> I'm currently running Xen 3.4 on SLES 11. I currently use OCFS2, but I'm
> going to move away from it because of issues we've been having. Instead,
> we'll be assigning LUNs from our SAN to the Xen servers and domUs will
> essentially have their own disks. This puts a lot more space requirements
> in play. With that in mind, I want to shrink a lot of our domUs.

A few questions:
1) What issues are you having with OCFS2?
2) Does your SAN support "thin provisioning"?
3) Why is switching from OCFS2 to raw disks per domU taking up any more space? OCFS2 does not allow for "sparse" disks, which means you should see similar disk requirements between OCFS2 and "thick provisioning" on your SAN.

-Nick

--------
This e-mail may contain confidential and privileged material for the sole use of the intended recipient. If this email is not intended for you, or you are not responsible for the delivery of this message to the intended recipient, please note that this message may contain SEAKR Engineering (SEAKR) Privileged/Proprietary Information. In such a case, you are strictly prohibited from downloading, photocopying, distributing or otherwise using this message, its contents or attachments in any way. If you have received this message in error, please notify us immediately by replying to this e-mail and delete the message from your mailbox. Information contained in this message that does not relate to the business of SEAKR is neither endorsed by nor attributable to SEAKR.
On Thu, 2010-04-22 at 13:09 -0600, Nick Couchman wrote:
> 1) What issues are you having with OCFS2?
> 2) Does your SAN support "thin provisioning"?
> 3) Why is switching from OCFS2 to raw disks per domU taking up any
> more space? OCFS2 does not allow for "sparse" disks, which means you
> should see similar disk requirements between OCFS2 and "thick
> provisioning" on your SAN.

OCFS2 does not allow sparse disks? I've never heard of that. I have had tickets open with Novell and they have never said it's not supported. I've emailed this list saying we're using OCFS2 and sparse files and nobody has raised any flags before (which could be understandable, being a mailing list).

We are currently doing sparse files on OCFS2. It was going fine for about a year, then we started having problems with OCFS2 becoming corrupted.

I don't know if the SAN supports thin provisioning, but I'm also not sure what you mean. Can you elaborate a little?

Thanks,
James
On Thu, 2010-04-22 at 13:09 -0600, Nick Couchman wrote:
> OCFS2 does not allow for "sparse" disks, which means you should see
> similar disk requirements between OCFS2 and "thick provisioning" on
> your SAN.

I checked with my contact from Novell on one of the tickets I have open. His response was that sparse files on OCFS2 are supported as long as you are on current code, because they did have issues in the past. Of course, he did not say when the support started.

James
> OCFS2 does not allow sparse disks? I've never heard of that. I have had
> tickets open with Novell and they have never said it's not supported.
> I've emailed this list saying we're using OCFS2 and sparse files and
> nobody has raised any flags before (which could be understandable, being
> a mailing list).

Perhaps OCFS2 on SLES11 does support sparse files. On SLES10, OCFS2 definitely does not support sparse files - if you try to create a sparse disk file, either via virt-manager or using a command like "dd if=/dev/zero of=disk0 bs=1G seek=10 count=0", it actually takes up 10 GB of space on the filesystem and disk, not the few kilobytes that you would expect from a sparse disk file.

> We are currently doing sparse files on OCFS2. It was going fine for
> about a year, then we started having problems with OCFS2 becoming
> corrupted.

I'm using SLES10 and I do not have any trouble with OCFS2 corruption. Not quite as new a version of Xen, but it's rock solid.

> I don't know if the SAN supports thin provisioning, but I'm also not
> sure what you mean. Can you elaborate a little?

Thin provisioning is a similar concept to sparse disks, but at the LUN level. It basically allows you to create a 100 GB LUN, for example, but only have it consume the space on your physical disks that is actually used on the volume. This allows you to over-allocate SAN disk space, using only what you're currently storing while allowing for future growth without the need to expand the LUN every time you want to grow. We use Compellent for our SAN, which features thin provisioning, and I'm also experimenting with using OpenSolaris + ZFS + COMSTAR in a shared storage role; ZFS supports thin provisioning of ZFS volumes.

Obviously, thin provisioning (and sparse files, for that matter) comes with some risk: if you're over-allocating storage space, you risk filling up your physical disks completely, which is never a good thing. But if you manage your environment properly, you'll have enough warning and you'll be able to add more storage space before you hit that threshold.

-Nick
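[Editor's note: "enough warning" implies some form of capacity monitoring. As a hedged sketch (the 80% threshold, the function name, and the output wording are invented for this illustration, not from the thread), a cron-able check might look like the following. Note that with thin provisioning, df inside the hosts cannot see physical-pool utilisation, so the SAN's own reporting would need watching as well.]

```shell
# Capacity-warning sketch: read "df -P"-style output on stdin and print
# a warning for every filesystem at or above a percentage threshold.
warn_if_full() {
    limit=$1
    awk -v limit="$limit" 'NR > 1 {
        use = $5
        sub(/%/, "", use)                # "95%" -> "95"
        if (use + 0 >= limit)
            printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, use, $6
    }'
}

# Live check of the local filesystems at an arbitrary 80% threshold:
df -P | warn_if_full 80
```

Run from cron and mailed to an admin, this gives the early warning needed before an over-committed pool actually fills.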
James Pifer wrote:
> I don't know if the SAN supports thin provisioning, but I'm also not
> sure what you mean. Can you elaborate a little?

It's like using a sparse file. You tell the SAN what size of volume you want and it appears to give it to you. What it's actually doing is the equivalent of using a sparse file, so it can overcommit on space and rely on the clients not all filling their disks. It means you can make all the volumes a good size with plenty of margin, but without wasting all the space that would add up to. Dunno what happens if too many of the guests start filling the space up, though.

--
Simon Hobson
On Thu, 2010-04-22 at 14:25 -0600, Nick Couchman wrote:
> Perhaps OCFS2 on SLES11 does support sparse files. On SLES10, OCFS2
> definitely does not support sparse files [...]

That has not been my experience. I started with SLES10 using OCFS2 with sparse files, and they were/are definitely sparse. I still have a couple of the SLES10 boxes running. I started with a 300 GB OCFS2 volume clustered between a couple of Xen servers and had a couple dozen domUs. I had to add a second 300 GB OCFS2 volume as my domUs approached 30. My Linux domUs were logically 50 GB each. My total logical disk space being used on those two 300 GB volumes was 1.3 TB. I would use "du -B1 -s" to verify.

I only know all of this because of all the tracking/moving/recovering I had to do when problems started. I also started tracking sizes while planning to move away from OCFS2 and away from sparse files.

Either way, since I've moved most of my domUs off of OCFS2 my environment has been much more stable, so I'm almost definitely getting away from OCFS2 and sparse files.

Thanks for the thin provisioning explanation on the SAN. I don't believe our SAN supports it.

Thanks,
James
> That has not been my experience. I started with SLES10 using OCFS2 with
> sparse files, and they were/are definitely sparse. I still have a couple
> of the SLES10 boxes running. I started with a 300 GB OCFS2 volume
> clustered between a couple of Xen servers and had a couple dozen domUs.
> I had to add a second 300 GB OCFS2 volume as my domUs approached 30. My
> Linux domUs were logically 50 GB each. My total logical disk space being
> used on those two 300 GB volumes was 1.3 TB. I would use "du -B1 -s" to
> verify.

Interesting... I'll have to try again. Maybe it changed mid-way through the SLES10 versions or something like that.

> I only know all of this because of all the tracking/moving/recovering I
> had to do when problems started. I also started tracking sizes while
> planning to move away from OCFS2 and away from sparse files.

Again, I've had none of these issues. I have four production machines accessing the same OCFS2 volume and have not ever had an issue with corruption. My SAN is FC-attached to these hosts, but I have four more hosts that I use for development and desktop support that are iSCSI-attached to an Openfiler SAN, and I've not had any issues with the iSCSI-attached OCFS2 volume, either.

> Either way, since I've moved most of my domUs off of OCFS2 my
> environment has been much more stable, so I'm almost definitely getting
> away from OCFS2 and sparse files.

Interesting how different those experiences can be!

-Nick
> Interesting... I'll have to try again. Maybe it changed mid-way
> through the SLES10 versions or something like that.

Hey Nick. I just did this on a stock SLES11 system using OCFS2. My dd command was slightly different than the one you wrote. Not sure how SLES's virt-manager creates the disk.

# dd if=/dev/zero of=disk0.test bs=1 count=1 seek=100G
1+0 records in
1+0 records out
1 byte (1 B) copied, 6.0127e-05 s, 16.6 kB/s

# du -B1 -s disk0.test
4096    disk0.test

# du -B1 -s --apparent-size disk0.test
107374182401    disk0.test

James
> Hey Nick. I just did this on a stock SLES11 system using OCFS2. My dd
> command was slightly different than the one you wrote. Not sure how
> SLES's virt-manager creates the disk.
>
> # dd if=/dev/zero of=disk0.test bs=1 count=1 seek=100G
> 1+0 records in
> 1+0 records out
> 1 byte (1 B) copied, 6.0127e-05 s, 16.6 kB/s
>
> # du -B1 -s disk0.test
> 4096    disk0.test
>
> # du -B1 -s --apparent-size disk0.test
> 107374182401    disk0.test

I'll have to give it a shot on my SLES10 systems and see what happens...

-Nick