I am building a mailserver, and with all the steps, I want to image the
drive at various 'checkpoints' so I can go back and redo from a particular
point.  The image is currently only 4GB on a 120GB drive.

fdisk reports:

Disk /dev/sdb: 111.8 GiB, 120034124288 bytes, 234441649 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000c89d

Device     Boot   Start     End Sectors  Size Id Type
/dev/sdb1          2048 1026047 1024000  500M 83 Linux
/dev/sdb2       1026048 2074623 1048576  512M 82 Linux swap / Solaris
/dev/sdb3       2074624 6268927 4194304    2G 83 Linux

and parted:

Model: Kingston SNA-DC/U (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  525MB   524MB   primary  ext3
 2      525MB   1062MB  537MB   primary  linux-swap(v1)
 3      1062MB  3210MB  2147MB  primary  ext4

What dd params work?

dd if=/dev/sdb of=os.img bs=1M count=3210 ?

thanks
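[Editor's note: a hedged back-of-the-envelope check of that count, computed from the fdisk numbers above; the variable names are invented for the sketch.]

```shell
# Sanity-check the image size from the fdisk output above.
# fdisk's End column is inclusive, so /dev/sdb3 ends at sector 6268927.
end_sector=6268927
sector_size=512
# Everything from sector 0 through the end of sdb3:
bytes=$(( (end_sector + 1) * sector_size ))
echo "$bytes"
# count= value for bs=1M, rounded up to a whole MiB
mib=$(( (bytes + 1048575) / 1048576 ))
echo "$mib"
```

This works out to 3209691136 bytes, i.e. exactly 3061 MiB, so the proposed bs=1M count=3210 would over-copy by roughly 149 MiB of unpartitioned space — harmless, just a slightly larger image.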
On Mar 2, 2017, at 6:36 PM, Robert Moskowitz <rgm at htt-consult.com> wrote:
>
> I want to image the drive at various 'checkpoints' so I can go back and
> redo from a particular point?
> what dd params work?
>
> dd if=/dev/sdb of=os.img bs=1M count=3210

That looks plausible.  (I haven't verified your count parameter exactly.)

However, I wonder why you're trying to reinvent snapshots, a technology now
built into several advanced filesystems, such as btrfs and ZFS?

https://en.wikipedia.org/wiki/Btrfs#Subvolumes_and_snapshots

btrfs is built into CentOS 7.  While there have been some highly-publicized
bugs in btrfs, they only affect the RAID-5/6 features.  You don't need that
here, so you should be fine with btrfs.

And if you really distrust btrfs, ZFS is easy enough to integrate into
CentOS on-site.

And if *that* is also out of the question, you have LVM2 snapshots:

http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html

Why reinvent the wheel?
On Mar 2, 2017, at 6:53 PM, Warren Young <warren at etr-usa.com> wrote:
>
> Why reinvent the wheel?

Oh, I forgot to say, LVM2, ZFS, and btrfs snapshots don't image the
*entire* drive including slack space.  They set a copy-on-write point,
which is near-instantaneous: whenever one of the current data blocks
changes, its content gets copied to a new space on the disk and modified
there, so that rolling back amounts to moving a bunch of pointers around,
not downing the whole machine and wiping out your prior setup, including
all that mail you've accumulated in the meantime.

If you're after some unstated goal, such as off-machine backups, there's
generally a way to send a copy of the snapshot to another machine, such as
via SSH.  This is also more efficient than copying a raw dd image.  Not
only does it skip over slack space, you can send the snapshot to another
similar machine and "play back" the snapshot there, effectively mirroring
the machine, taking only as much time as needed to transmit the *changes*
since the last snapshot.

If you've used a virtual machine manager with snapshotting features, these
filesystems' features are a lot like that.  Quick, efficient, and quite
robust.
On 03/02/2017 08:53 PM, Warren Young wrote:
> On Mar 2, 2017, at 6:36 PM, Robert Moskowitz <rgm at htt-consult.com> wrote:
>> I want to image the drive at various 'checkpoints' so I can go back and
>> redo from a particular point?
>> what dd params work?
>>
>> dd if=/dev/sdb of=os.img bs=1M count=3210
> That looks plausible.  (I haven't verified your count parameter exactly.)
>
> However, I wonder why you're trying to reinvent snapshots, a technology
> now built into several advanced filesystems, such as btrfs and ZFS?
>
> https://en.wikipedia.org/wiki/Btrfs#Subvolumes_and_snapshots
>
> btrfs is built into CentOS 7.  While there have been some
> highly-publicized bugs in btrfs, they only affect the RAID-5/6 features.
> You don't need that here, so you should be fine with btrfs.
>
> And if you really distrust btrfs, ZFS is easy enough to integrate into
> CentOS on-site.
>
> And if *that* is also out of the question, you have LVM2 snapshots:
>
> http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
>
> Why reinvent the wheel?

This is CentOS7-armv7.  Not all the tools are there.  I keep getting
surprises of some rpm not being in the repo, but if I dig I will find it
(php-imap is NOT built yet, though, and that I need).

The base image is a dd, and you start with something like:

xzcat CentOS-Userland-7-armv7hl-Minimal-1611-CubieTruck.img.xz | sudo dd of=/dev/sdb bs=4M; sync

btw, this reports:

0+354250 records in
0+354250 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 120.656 s, 26.7 MB/s

Then you boot up (connected via the UART with a USB/TTL adapter for a
serial console).

I want a drive image, and that is easy to do.  I disconnect my drive from
the CubieTruck, stick it into a USB/SATA adapter, and I can image the whole
drive.  For just a development and snapshotting project, dd (and xzcat) do
the job, and that is really what the Fedora-arm and CentOS-arm teams have
been doing.  I actually did this 2+ years ago creating Redsleeve6 images,
but can't find any of my notes. :(
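[Editor's note: the `0+354250 records in` line means dd saw no full 4M blocks — reads from a pipe return whatever is buffered, and dd counts those as partial records.  The pipeline itself can be dry-run unprivileged against a regular file instead of /dev/sdb; the file names below are invented for the demo.]

```shell
# Round-trip a small random image through the xzcat | dd restore pipeline.
# demo.img and restored.img are made-up stand-ins for the real image and
# /dev/sdb.
dd if=/dev/urandom of=demo.img bs=1M count=4 2>/dev/null
xz -k demo.img                           # writes demo.img.xz, keeps demo.img
xzcat demo.img.xz | dd of=restored.img bs=4M 2>/dev/null
sync
cmp -s demo.img restored.img && echo "restore is byte-identical"
```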
On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz <rgm at htt-consult.com>
wrote:

> dd if=/dev/sdb of=os.img bs=1M count=3210
>

I would recommend bs=512 to keep the block sizes the same; it's not a huge
difference, but it just seems to be happier for some reason.  Also add
status=progress if you would like to monitor how it is doing.  Otherwise
the command you have should work.
On 03/02/2017 09:06 PM, fred roller wrote:
> On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz <rgm at htt-consult.com>
> wrote:
>
>> dd if=/dev/sdb of=os.img bs=1M count=3210
>>
> I would recommend bs=512 to keep the block sizes the same though not a huge
> diff just seems to be happier for some reason and add status=progress if
> you would like to monitor how it is doing.  Seems the command you have
> should work otherwise.

So, given the fdisk output:

Disk /dev/sdb: 111.8 GiB, 120034124288 bytes, 234441649 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000c89d

Device     Boot   Start     End Sectors  Size Id Type
/dev/sdb1          2048 1026047 1024000  500M 83 Linux
/dev/sdb2       1026048 2074623 1048576  512M 82 Linux swap / Solaris
/dev/sdb3       2074624 6268927 4194304    2G 83 Linux

would count=6268927 ?

Oh, and this way I can lay the image down on any drive.  Even a mSD card
(as that is the actual boot device, but the Cubie uboot (and linksprite)
can run almost completely from a SATA drive).

thanks
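[Editor's note: one caveat worth checking before settling on that count — if I am reading fdisk right, its End column is inclusive, so sectors run 0 through 6268927 and count=6268927 with bs=512 stops one sector short of the end of sdb3.  A quick arithmetic sketch:]

```shell
# /dev/sdb3 from the fdisk output: Start=2074624, Sectors=4194304
start=2074624
sectors=4194304
end=$(( start + sectors - 1 ))   # inclusive last sector, as fdisk prints it
echo "$end"                      # matches fdisk's End column
count=$(( end + 1 ))             # sectors 0..end inclusive
echo "$count"                    # sectors needed to capture the whole partition
```

With bs=512, count=6268928 covers the partition exactly; count=6268927 omits the final sector.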
On Thu, Mar 02, 2017 at 09:06:52PM -0500, fred roller wrote:
> On Thu, Mar 2, 2017 at 8:36 PM, Robert Moskowitz <rgm at htt-consult.com>
> wrote:
>
> > dd if=/dev/sdb of=os.img bs=1M count=3210
> >
> I would recommend bs=512 to keep the block sizes the same though not a huge
> diff just seems to be happier for some reason and add status=progress if
> you would like to monitor how it is doing.  Seems the command you have
> should work otherwise.

The dd blocksize has nothing to do with the disk sector size.  The disk
sector size is the number of bytes in a minimal read/write operation
(because the physical drive can't manipulate anything smaller).  The dd
blocksize is merely the number of bytes read/written in a single
read/write operation (or not bytes, but K, or KB, or other units,
depending on the options you use).

It makes sense for the bs option in dd to be a multiple of the actual disk
block/sector size, but it isn't even required.  If you did dd with a block
size of, e.g., 27, it would still work; it'd just be stupidly slow.

Fred
--
-------------------------------------------------------------------------------
Under no circumstances will I ever purchase anything offered to me as the
result of an unsolicited e-mail message.  Nor will I forward chain letters,
petitions, mass mailings, or virus warnings to large numbers of others.
This is my contribution to the survival of the online community.
    --Roger Ebert, December, 1996
----------------------------- The Boulder Pledge -----------------------------
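[Editor's note: Fred's point is easy to demonstrate on a scratch file — any bs yields byte-identical output; only the number of read/write calls changes.  A small unprivileged sketch with invented file names:]

```shell
# Copy the same data with a silly bs and a sane bs; the results are identical.
dd if=/dev/urandom of=src.bin bs=512 count=100 2>/dev/null
dd if=src.bin of=out27.bin bs=27 2>/dev/null   # 27-byte blocks: legal, just slow
dd if=src.bin of=out1m.bin bs=1M 2>/dev/null
cmp -s out27.bin out1m.bin && echo "identical copies"
```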
On 03/02/2017 11:57 PM, Robert Moskowitz wrote:
> The following worked:
>
> # dd if=/dev/sdb of=cubietruck.img bs=512 count=6268927
>
> 6268927+0 records in
> 6268927+0 records out
> 3209690624 bytes (3.2 GB, 3.0 GiB) copied, 114.435 s, 28.0 MB/s
>
> So bs= IS the drive blocksize.
>
> This is the result of trying a number of different values for bs and
> count.

You can set bs to a multiple of 512 and it will go a lot faster.  If I
have to use raw dd for cloning, I will factor the count all the way down
to primes, and multiply the blocksize by all of the factors up to the
largest prime factor.  This is trivially easy on a CentOS system (factor
is part of coreutils):

[lowen at FREE-IP-92 ~]$ factor 6268927
6268927: 7 43 59 353

So you could use 512 times any of these factors, or several of these
factors.  I would probably use the line:

dd if=/dev/sdb of=cubietruck.img bs=9092608 count=353

Note that while dd can use the abbreviation 'k', you would not want to use
that here, since 2 is not one of the factors of your count.  A roughly 9MB
blocksize is going to be loads faster than 512, but still manageable.

Or you could make it easy on yourself and use either dd_rescue or
ddrescue.  When I was working on the ODROID C2 stuff last year I built
ddrescue from source RPM early on, before it got built as part of the EPEL
aarch64 stuff.  Either of these two will figure out the optimum blocksize
for you for best performance, and you get progress indications without
having to have another terminal open to issue the fun 'kill -USR1
$pid-of-dd' command to get that out of dd.  The ddrescue utility for one
includes a '--size=<bytes>' parameter so that you can clone only the
portion you want.
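[Editor's note: Lamar's arithmetic checks out and is easy to verify with shell arithmetic; the factors come straight from his post.]

```shell
# Verify the factoring trick: fold all prime factors except the largest
# into the blocksize, keep the largest as the count.
factor 6268927                    # coreutils; prints: 6268927: 7 43 59 353
bs=$(( 512 * 7 * 43 * 59 ))
echo "$bs"                        # the bs=9092608 from the post above
total=$(( bs * 353 ))
echo "$total"                     # same byte total as bs=512 count=6268927
```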
On Mar 3, 2017, at 10:49 AM, Lamar Owen <lowen at pari.edu> wrote:
>
> On 03/02/2017 11:57 PM, Robert Moskowitz wrote:
>> The following worked:
>>
>> # dd if=/dev/sdb of=cubietruck.img bs=512 count=6268927
>>
>> 6268927+0 records in
>> 6268927+0 records out
>> 3209690624 bytes (3.2 GB, 3.0 GiB) copied, 114.435 s, 28.0 MB/s
>>
>> So bs= IS the drive blocksize.
>>
>> This is the result of trying a number of different values for bs and count.
>
> You can set bs to a multiple of 512 and it will go a lot faster.

Maybe, maybe not.  The OP said he's on an embedded system, which often
implies low-end eMMC or SD type storage, and 28 MB/sec is typical for such
things.

When mirroring HDDs and proper SSDs, yes, you want to use large block
sizes.