I manage a set of CentOS operations workstations which are all clones of each other (3 "live" and 1 "spare" kept powered down); each has a single drive with four partitions (/boot, /, /home, swap). I've already set up cron'd rsync jobs to copy the operations accounts between the workstations on a daily basis, so that when one fails, it is a simple, quick process to swap in the spare, restore the accounts from one of the others, and continue operations. This has been successfully tested in practice on more than one occasion.

However, when I perform system updates (about once a month), I like to create a temporary "clone" of the system to an external drive before running the update, so that I can simply swap drives or clone back if something goes horribly wrong. I have been using CloneZilla to do this, but it can take a while since it blanks each partition before copying, and it requires a system shutdown.

Question 1: Would it be sufficient to simply use CloneZilla once to initialize the backup drive (or do it manually, but CloneZilla makes it easy-peasy), and then use "rsync -aHx --delete" (let me know if I missed an important rsync option) to update the clone partitions from then on? I am assuming that the MBR typically doesn't get rewritten during system updates, though "/etc/grub.conf" obviously does get changed.

Suppose I want to store more than one workstation on a single drive (easy), and be able to boot into any of the stored configurations (hard). Here's what I thought of:
1) Create a small "master" partition which contains a bootloader (such as a CentOS rescue disk), and a single swap partition.
2) Create one partition "set" per workstation (/boot, /, /home, excluding swap). Obviously, these will all likely be logical, and each workstation must use unique labels for mounting partitions.
3) On the "master" partition, modify the bootloader menu to allow one to chainload the /boot partitions for each configuration. (This is the "Voila!" step that I haven't fully figured out.)

Question 2: Is there a better way to do the above? How do I perform the "Voila!" step, i.e. what's the right chainload command for this? Also, the chainloaded partitions are logical; is this OK?

I also have a single off-site NAS disk which contains clones of all the critical workstations on-site. Most of them are Macs, so I can use sparseimages on the NAS for the clones and get easy-peasy incremental clones. I also do this for the Linux box (backing it up incrementally to an HFS case-sensitive sparseimage via rsync), but it's (obviously) a bit of a kludge.

Question 3: Is there a UNIX equivalent to the Mac sparseimage that I should be using for this? ("tar -u" can do it (duh), but then the backup file grows without bound.)

Thanks,
-G.

--
Glenn Eychaner (geychaner at lco.cl)
Telescope Systems Programmer, Las Campanas Observatory
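[A hedged aside on the "Voila!" step: under GRUB legacy, the bootloader on CentOS 5/6, the menu on the "master" partition might look roughly like the sketch below. The (hd0,N) numbers are purely illustrative; GRUB legacy counts partitions from zero, so the first logical partition is (hd0,4). Chainloading a logical partition does work, but only if a boot sector has been installed in that partition (e.g. GRUB stage1 via the grub shell's "setup" command); the "configfile" variant avoids that requirement by reading the target partition's own grub.conf directly.]

# grub.conf on the "master" boot partition -- partition numbers illustrative
default=0
timeout=10

# Option A: chainload a workstation's /boot partition
# (needs a boot sector installed in that partition)
title Workstation A
        rootnoverify (hd0,4)
        chainloader +1

# Option B: hand off to that partition's own GRUB menu instead
title Workstation B
        configfile (hd0,7)/grub/grub.conf

[Either way, each stored configuration's /etc/fstab and the kernel lines in its own grub.conf would need to refer to its partitions by the unique labels mentioned above (root=LABEL=...), so the right root filesystem is found no matter which set is booted.]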
Note I'm cc'ing Glenn, since Nixnet is doing it to me again.

Glenn Eychaner wrote:
> I manage a set of CentOS operations workstations which are all clones of
> each other (3 "live" and 1 "spare" kept powered down); each has a single drive
> with four partitions (/boot, /, /home, swap). I've already set up cron'd rsync
> jobs to copy the operations accounts between the workstations on a daily basis,
> so that when one fails, it is a simple, quick process to swap in the spare,
> restore the accounts from one of the others, and continue operations. This
> has been successfully tested in practice on more than one occasion.
>
> However, when I perform system updates (about once a month), I like to
> create a temporary "clone" of the system to an external drive before running the
> update, so that I can simply swap drives or clone back if something goes
> horribly wrong. I have been using "CloneZilla" to do this, but it can take
> a while since it blanks each partition before copying, and requires a system
> shutdown.
>
> Question 1: Would it be sufficient to simply use CloneZilla once to
> initialize the backup drive (or do it manually, but CloneZilla makes it
> easy-peasy), and then use "rsync -aHx --delete" (let me know if I missed
> an important rsync option) to update the clone partitions from then on?
> I am assuming that the MBR typically doesn't get rewritten during system
> updates, though "/etc/grub.conf" obviously does get changed.

We use rsync -HPavxz.

> Suppose I want to store more than one workstation on a single drive
> (easy), and be able to boot into any of the stored configurations (hard).
> Here's what I thought of:
> 1) Create a small "master" partition which contains a bootloader
> (such as a CentOS rescue disk), and a single "swap" partition.
> 2) Create one partition "set" per workstation (/boot, /, /home, excluding
> swap). Obviously, these will all likely be logical, and each workstation
> must use unique labels for mounting partitions.
> 3) On the "master" partition, modify the bootloader menu to allow one to
> chainload the /boot partitions for each configuration. (This is the
> "Voila!" step that I haven't fully figured out.)

How 'bout setting up a pxeboot, with a kickstart file (see the sketch after the procedure below)? Or, alternatively, the way we prefer to do upgrades when we can. Using this, you could just get a minimally running system up - say, have that on a spare drive - then do this procedure. If you were careful, you might even do it using a Linux rescue. Anyway, on a running system:

mkdir /new /boot/new
rsync -HPavzx --exclude=/old --exclude=/var/log/wtmp $machine:/. /new/.
rsync -HPavzx $machine:/boot/. /boot/new/.
rsync -HPavzx /etc/sysconfig/network-scripts/ifcfg-eth* /new/etc/sysconfig/network-scripts
rsync -HPavzx /etc/sysconfig/hwconf /new/etc/sysconfig
rsync -HPavzx /boot/grub/device.map /boot/new/grub/
rsync -HPavzx /etc/udev/rules.d/70-persistent-net.rules /new/etc/udev/rules.d/
find /new/var/log/ -type f -exec cp /dev/null {} \;
<apache, cluster, other special stuff>
rsync -HPavzx /etc/ssh/ssh_host* /new/etc/ssh

Then, the rotation:

zsh
zmodload zsh/files
cd /boot
mkdir old
mv * old
mv old/lost+found .
mv old/new/* .

# Root partition.
cd /
mkdir old
mv * old
mv old/lost+found .
#mv old/root . -- WHY?
mv old/scratch .
mv old/new/* .
sync
sync

If the other hardware's different than the copied-from:

mount --bind /dev /new/dev
mount --bind /sys /new/sys
mount --bind /proc /new/proc
mount --bind /boot/new /new/boot
chroot /new
cd /lib/modules
VER=$(ls -rt1 | tail -1)
echo $VER
mkinitrd X $VER
mv X /boot/initrd-$VER.img
exit
umount /new/dev /new/sys /new/proc /new/boot

And reboot.
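[An aside on the pxeboot-with-kickstart idea mentioned above: a minimal kickstart file is roughly sketched below. Every value here (mirror URL, timezone, partition sizes, package groups) is illustrative rather than a recommendation, and it assumes a CentOS 6 install; the real file would carry the site's own package list and an encrypted root password.]

# ks.cfg -- illustrative sketch only
install
url --url=http://mirror.centos.org/centos/6/os/x86_64/
lang en_US.UTF-8
keyboard us
timezone America/Santiago
rootpw --iscrypted <hash-goes-here>
authconfig --enableshadow --passalgo=sha512
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype=ext4 --size=500
part swap --size=4096
part / --fstype=ext4 --size=20480
part /home --fstype=ext4 --grow --size=1
reboot

%packages
@core
@base
%end

[Pointed at from the pxelinux append line (ks=http://...), something like this gets a generic box to a known state that the rsync'd account data can then be dropped onto.]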
On Fri, Sep 13, 2013 at 3:51 PM, Glenn Eychaner <geychaner at mac.com> wrote:
> I manage a set of CentOS operations workstations which are all clones of each
> other (3 "live" and 1 "spare" kept powered down); each has a single drive with
> four partitions (/boot, /, /home, swap). I've already set up cron'd rsync jobs
> to copy the operations accounts between the workstations on a daily basis,
> so that when one fails, it is a simple, quick process to swap in the spare,
> restore the accounts from one of the others, and continue operations. This has
> been successfully tested in practice on more than one occasion.

You might want to consider whether anything worth saving really needs to be stored on the individual workstations. Could you perhaps mount the home directories from a reliable server or NAS, or, more drastically, have one or a few multiuser hosts with most users using a remote X desktop (freenx/NX has pretty good performance)? That doesn't really eliminate the need for backups/spares, but it changes the scope of things quite a bit.

> However, when I perform system updates (about once a month), I like to create
> a temporary "clone" of the system to an external drive before running the
> update, so that I can simply swap drives or clone back if something goes
> horribly wrong. I have been using "CloneZilla" to do this, but it can take a
> while since it blanks each partition before copying, and requires a system
> shutdown.

Look at 'rear' (in the epel repo) as a possible alternative. It will do a tar image backup to an nfs target (with rsync and some other methods as alternatives) and make a bootable iso with a restore script. The big advantage is that you don't have to shut down for the backup, and you also have an opportunity to edit the disk layout before the restore if you need to.

> Question 1: Would it be sufficient to simply use CloneZilla once to initialize
> the backup drive (or do it manually, but CloneZilla makes it easy-peasy), and
> then use "rsync -aHx --delete" (let me know if I missed an important rsync
> option) to update the clone partitions from then on? I am assuming that the
> MBR typically doesn't get rewritten during system updates, though
> "/etc/grub.conf" obviously does get changed.

I'd expect that to work if the disk is mounted into a different system and not running directly from it. Worst case would be you'd have to boot from a DVD in rescue mode to do a 'grub-install' if it didn't boot.
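[An aside on that worst case: the rescue-mode fix might go roughly like the sketch below. Device names are illustrative, and it assumes the CentOS install media's "linux rescue" mode has found and mounted the cloned system under /mnt/sysimage.]

# after booting the install DVD/CD with "linux rescue"
chroot /mnt/sysimage
grub-install /dev/sda    # the drive the clone boots from -- adjust as needed
exit
# then reboot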
("tar -u" can do it (duh), but then the backup file grows > without bound.)If you can get things down to backing up at the file level instead of full images (or maybe do it besides to keep a history) look at backuppc. It will do the backups over rsync and pool all copies of files with duplicate content whether they are on different machines or previous backups of the same target. It will take the least disk space to keep a fairly long history on-line than anything else and it is pretty much full-auto once you set it up. And you can give machine 'owners' separate logins to its web interface so they can do their own restores, -- Les Mikesell lesmikesell at gmail.com