david.corral@silicetelecom.com
2008-Mar-14 10:18 UTC
[Xen-users] Save/Restore/Reboot FS Corruption
I've been looking for information about this matter, but didn't find much. Saving and restoring works pretty well, but the problem comes when I reboot after restoring. The filesystem becomes corrupt after the restoration, so it enters "read only" status.

I can't provide more info, since I'm far from the computer. The only thing I can say is that it's a Debian Etch debootstrapped VM with an ext3 filesystem. The disk image of the VM is stored on a remote storage device, so the directory containing the image is mounted via NFS.

Is anyone else experiencing this issue?

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Hrrrmm. Sounds nasty :-(

> I've been looking for information about this matter, but didn't find
> much.
>
> Saving and restoring works pretty well, but the problem comes when I
> reboot after restoring. The filesystem becomes corrupt after the
> restoration, so it enters "read only" status.
>
> I can't provide more info, since I'm far from the computer. The only
> thing I can say is that it's a Debian Etch debootstrapped VM with an
> ext3 filesystem. The disk image of the VM is stored on a remote
> storage device, so the directory containing the image is mounted via
> NFS.
>
> Is anyone else experiencing this issue?

What was the sequence of operations that you went through when this happened? Something like:

xm save <domain>
... time passes ...
xm restore <domain>
... reboot domain ...
... corruption!

Is it possible that anything modified the domain's virtual disks during the "... time passes ..." section above? Like maybe somebody mounted a filesystem in the virtual disk from dom0 for some reason? Or somebody accidentally did an xm create in the meantime?

Domains will, unfortunately, typically corrupt their virtual disks if their filesystem changes whilst they're suspended. You might not notice this until the filesystem gets mounted at reboot time. Linux doesn't generally expect data to change on a disk it thinks it has mounted - even if it's actually xm saved at the time - and it can get confused if changes do occur.

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
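The "did anything touch the image while it was suspended?" question can be answered mechanically for file-backed disks. A minimal sketch, assuming a file-backed image (the path and the simulated write are made up for illustration; the xm commands only appear as comments, since they need a real Xen host): record a checksum of the image at save time and refuse to restore if it no longer matches.

```shell
#!/bin/sh
# Sketch: detect whether a domain's disk image was modified while saved.
# The image path is hypothetical; on a real host you would run the
# checksum steps around "xm save" / "xm restore".
set -e

IMG=/tmp/xen_save_demo.img          # hypothetical file-backed disk image
SUM=/tmp/xen_save_demo.sha1

# Stand-in for the domain's disk image.
echo "filesystem data" > "$IMG"

# At save time (after: xm save etch /var/lib/xen/etch.sav)
# record a checksum of the image.
sha1sum "$IMG" > "$SUM"

# ... time passes; any write to $IMG here makes the check below fail ...

# Before restore: verify the image is untouched.
if sha1sum -c "$SUM" >/dev/null 2>&1; then
    echo "image unchanged - safe to restore"
    # xm restore /var/lib/xen/etch.sav
else
    echo "image modified while suspended - refusing to restore" >&2
    exit 1
fi
```

This doesn't prevent the corruption Mark describes, but it turns a silent filesystem mangling into a loud failure before the domain resumes.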
> Is it possible that anything modified the domain's virtual disks during
> the "... time passes ..." section above? Like maybe somebody mounted a
> filesystem in the virtual disk from dom0 for some reason? Or somebody
> accidentally did an xm create in the meantime?

Just thinking out loud here, but how much work do you think it would be to stop this happening, at least for LVM-backed devices?

lvchange has the ability to change an LV to not-available (basically offline), or to read-only. Assuming these changes are persistent across reboots of dom0, an 'xm save' could set the LV to not-available after the save is complete, and resume could check that it is in a not-available state before changing it back to available and starting the domain again. That would make sure nobody has inadvertently touched the LV in the meantime. An 'xm create' would fail because the LV was not available, as would any attempt to mount it. 'xm create' would need a 'force' option to set the LV back to available again.

For file-backed devices, you could just rename the backing file on save and rename it back again on resume. For physical-disk-backed devices I don't think there is such an easy solution...

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
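The rename idea for file-backed devices can be sketched without a Xen host. In this sketch the image path and the ".suspended" suffix are invented for illustration, and the xm commands appear only as comments; the mv calls are the actual mechanism: while the file is parked under a different name, anything opening the original path simply fails to find it.

```shell
#!/bin/sh
# Sketch of the rename-on-save idea for file-backed virtual disks.
# The xm commands are illustrative placeholders; only the rename logic runs.
set -e

IMG=/tmp/xen_rename_demo.img

echo "filesystem data" > "$IMG"   # stand-in for the domain's disk image

# After "xm save <domain>": park the image under a different name so
# nothing can accidentally open it at its usual path.
mv "$IMG" "$IMG.suspended"

# Anyone trying the original path now fails:
if [ ! -e "$IMG" ]; then
    echo "image parked - an accidental xm create or mount would fail"
fi

# Before "xm restore <domain>": check the parked copy is still there,
# then move it back into place.
if [ -e "$IMG.suspended" ]; then
    mv "$IMG.suspended" "$IMG"
    echo "image back at original path - safe to restore"
else
    echo "parked image missing - aborting restore" >&2
    exit 1
fi
```

The lvchange variant would follow the same shape, with `lvchange -an` / `lvchange -ay` in place of the two mv calls, plus a check of the LV's availability bit before reactivating it.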