... though I tried, read and typed for the last 4 hours, I still have no clue.
Please, can anyone give a clear idea of how this works:
Get the content of c0d1s1 onto c0d0s7?
c0d1s1 is pool "home" and active; c0d0s7 is not active.

I have followed the suggestion in
http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf

% sudo zfs snapshot home@backup
% zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
home         2.38G   135G  2.37G  /export/home
home@backup      0      -  2.37G  -
% zfs backup home@backup > /tmp/backhome
unrecognized command 'backup'

I also tried 'replace':

% zpool replace home c0d1s1 c0d0s7
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0d0s7 is part of exported or potentially active ZFS pool home. Please see zpool(1M).

Then I read zpool(1M) and don't understand what is wrong.

A third way also doesn't work:

% zpool export home
cannot unmount '/export/home': Device busy

Another question: when I boot to single user, 'home' is not mounted, and then I have no idea how to do that, since the mount command does not accept the slices.

Or, in the good old, non-ZFS way, I intend to do either

% mount /dev/dsk/c0d0s7 /mnt
% cp -a /export/home/* /mnt/

or, my preferred way,

dump -0au -f backup.home /dev/dsk/c0d1s1

then reboot, umount /dev/dsk/c0d1s7 and restore.

I understand all this is not needed any longer, but the documentation does not seem to cater for the very basics here; also that nice flash movie doesn't address the problem of how to replicate a complete file system and shut the first one off (which is why adding to the pool won't help).

Thanks for some hints to a beginner,
Uwe
D'Oh! Someone needs to update
www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
Answers below...

Uwe Dippel wrote:
> ... though I tried, read and typed for the last 4 hours, I still have no clue.
> Please, can anyone give a clear idea of how this works:
> Get the content of c0d1s1 onto c0d0s7?
> c0d1s1 is pool "home" and active; c0d0s7 is not active.
>
> I have followed the suggestion in
> http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
> % sudo zfs snapshot home@backup
> % zfs list
> NAME          USED  AVAIL  REFER  MOUNTPOINT
> home         2.38G   135G  2.37G  /export/home
> home@backup      0      -  2.37G  -
> % zfs backup home@backup > /tmp/backhome
> unrecognized command 'backup'

About a year ago we changed 'backup' to 'send' and 'restore' to 'receive'.
The zfs_demo.pdf needs to be updated.

> I also tried 'replace':
> % zpool replace home c0d1s1 c0d0s7
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c0d0s7 is part of exported or potentially active ZFS pool home. Please see zpool(1M).
> Then I read zpool(1M) and don't understand what is wrong.

What is using c0d0s7? Was it previously exported? If you really don't
want the data on c0d0s7 any more, try using the '-f' flag.

> A third way also doesn't work:
> % zpool export home
> cannot unmount '/export/home': Device busy

This is often the case if there is an active process with files open or
a current working directory in /export/home.

> Another question: when I boot to single user, 'home' is not mounted, and then I
> have no idea how to do that, since the mount command does not accept the slices.
>
> Or, in the good old, non-ZFS way, I intend to do either
>
> % mount /dev/dsk/c0d0s7 /mnt
> % cp -a /export/home/* /mnt/

Use 'zpool import' instead of mount, except for the case where you are
using legacy mount points with ZFS (see the zfs man page for the legacy
discussion).

> or, my preferred way,
>
> dump -0au -f backup.home /dev/dsk/c0d1s1
> then reboot, umount /dev/dsk/c0d1s7 and restore.

What exactly are you trying to accomplish? Often I see this when someone
wants a "clean" ufsdump. IMHO, the issues which this procedure solves are
more conveniently solved with ZFS snapshots.

> I understand all this is not needed any longer, but the documentation does not
> seem to cater for the very basics here; also that nice flash movie doesn't
> address the problem of how to replicate a complete file system and shut the
> first one off (which is why adding to the pool won't help).

What exactly are you trying to accomplish? Could it be that you are
looking for the zfs clone subcommand?

> Thanks for some hints to a beginner,

These are good questions; we should look to update the FAQ to show some
examples of common procedures.
 -- richard
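[As an aside to the 'Device busy' error above: fuser(1M) is the usual way to find the blocker before retrying the export. A quick sketch using the paths from this thread:]

# list processes with open files or a current working directory under
# /export/home (fuser prints the PIDs plus usage codes, and -u adds the
# login name of each owner)
fuser -cu /export/home

# once those processes have been stopped or moved elsewhere,
# the export should go through
zpool export home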
> Get the content of c0d1s1 to c0d0s7?
> c0d1s1 is pool home and active; c0d0s7 is not active.

I have not tried this particular use case, but I think this is a case for
"zfs send" and "zfs receive". You'd create a new pool containing only
c0d0s7 and do something like this, assuming your original pool was called
u01 and you'd put c0d0s7 in u02:

root@dev303:/u01/home# zfs snapshot u01/home@backup
root@dev303:/u01/home# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G    116K    354G     0%  ONLINE     -
u02                     354G    111K    354G     0%  ONLINE     -
root@dev303:/u01/home# zfs send u01/home@backup | zfs receive u02/home
root@dev303:/u01/home# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
u01               113K   348G  27.5K  /u01
u01/home         28.5K   348G  28.5K  /u01/home
u01/home@backup      0      -  28.5K  -
u02               146K   348G  26.5K  /u02
u02/home         28.5K   348G  28.5K  /u02/home
u02/home@backup      0      -  28.5K  -

One caveat here is that I could not find a way to back up the base of the
zpool "u01" into the base of zpool "u02", i.e.

zfs snapshot u01@backup
zfs send u01@backup | zfs receive u02

does not work because "u02" already exists - the receive must be done into
a brand new zfs (it will create the zfs). I suppose you could get around
this by creating a new zfs and "mv * ../." from there.

PS I think the "zfs backup" functionality was replaced with "zfs send" -
zfs send just writes to stdout, so you can pipe it to ssh to send it to
another machine, redirect it to a file, etc.

> Another question: When I boot to single user, the 'home' is not mounted;
> and then I have no idea how to do that; since the mount command does not
> accept the slices.

I can't get to the console of a system to take it to single user, but you
might try "svcadm enable -tr filesystem/local" or "zfs mount -a".

-Andy
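[To illustrate the PS above, a minimal sketch of redirecting the send stream; the file path and the remote host and pool names are only placeholders:]

# save the stream to a file...
zfs send u01/home@backup > /var/tmp/home.zfsstream
# ...and restore it later into a new file system
zfs receive u02/home < /var/tmp/home.zfsstream

# or stream it straight to another machine over ssh
zfs send u01/home@backup | ssh otherhost zfs receive tank/home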
> root@dev303:/u01/home# zfs snapshot u01/home@backup
> root@dev303:/u01/home# zfs send u01/home@backup | zfs receive u02/home
>
> One caveat here is that I could not find a way to back up the base of the
> zpool "u01" into the base of zpool "u02", i.e.
>
> zfs snapshot u01@backup
> zfs send u01@backup | zfs receive u02
>
> does not work because "u02" already exists - the receive must be done into
> a brand new zfs (it will create the zfs). I suppose you could get around
> this by creating a new zfs and "mv * ../." from there.
>
> PS I think the "zfs backup" functionality was replaced with "zfs send" -
> zfs send just writes to stdout, so you can pipe it to ssh to send it to
> another machine, redirect it to a file, etc.
>
>> Another question: When I boot to single user, the 'home' is not mounted;
>> and then I have no idea how to do that; since the mount command does not
>> accept the slices.
>
> I can't get to the console of a system to take it to single user, but you
> might try "svcadm enable -tr filesystem/local" or "zfs mount -a".

Firstly, your last two proposals work okay.

I wonder if I should start a new thread for this, but to me, as a 'cool eye'
third-party reviewer, ZFS has lost focus very much: what had been intended as
a high-level file system 'language' or API has recently - so it seems to me -
regressed into a bunch of low-level, atomistic commands. The removal of
'backup' is a good example: "backup filesystem1 filesystem2" is a high-level
approach. Now we/you are back to send/receive. Thirty years ago, dump had
exactly the same: dump/restore. Only, the word 'dump' has a negative bias. So
a word with a negative bias was replaced with a misleading word: 'send'. What
progress!?

Look at all the proposals here in response to my questions on how to get an
identical copy of a filesystem onto another partition! Just look at the number
of lines needed, of non-obvious commands. Also, Richard's suggestion uses at
least the 'wrong' command: 'clone'. The utility for cloning would be 'dd'. I
doubt that zfs actually *clones* the drive.

I can only urgently suggest to review the work done, and if the desire
actually prevails to offer a high-level command set, to revert to high-level
commands. backup could be a great asset, as in

backup [-f] pool|filesystem pool|filesystem

*That* would help the admin: backing up a live pool into another pool, or into
another filesystem. Going back to my original, genuine and very common
problem, I would *expect* a high-level command like this to exist, so that I
could type something like

backup home /dev/dsk/c0d0s7

A one-liner that everyone understands and might want to type intuitively, with
a '-f' to force the backup if the file system on c0d0s7 already existed, i.e.
to grant overwrite permission. Or

backup /dev/dsk/c0d1s1 backup.home

Instead, it is something like: "First, you have to make a snapshot. Then you
send this snapshot to a ZFS filesystem that exists. Then, you can receive the
file resulting from this action onto a non-existing drive." Sorry, that is
*worse* than dump/restore! No, I don't have to newfs the new drive any longer,
but if it exists, I have to destroy it before I can receive the snapshot.
That's not much progress! [end of rant]

And I don't even dare to attack all those fabulous underpinnings and the huge
development effort and progress of the work done. I do dare to question,
though, the party who signed off on the interface, and its deviation from
high-level comprehensive commands to piecemeal atomic
can-(and-must)-do-everything-and-anything.

Uwe
Andy, my excuses, I didn't really appreciate your input in my earlier mail!

> I can't get to the console of a system to take it to single user, but you
> might try "svcadm enable -tr filesystem/local" or "zfs mount -a".

Both work properly. Half of the job done; now I have the new home mounted, but
inactive. So I can rm -Rf * or similar there, in order to 'cp -a' the content
of the old home to the new home. Still the other half is unresolved: how do I
mount the old home, which is in no fstab (mnttab), on c0d1s1? I can only think
of rebooting to the old system, also single user, also 'mount -a'; but then,
how to store the files? In the archives here I read that GNU tar does not
handle all the features! Which is why I'd like to mount both homes, old and
new, at the same time, hoping that 'cp -a' will be complete.

> One caveat here is that I could not find a way to back up the base of the
> zpool "u01" into the base of zpool "u02", i.e.
>
> zfs snapshot u01@backup
> zfs send u01@backup | zfs receive u02
>
> does not work because "u02" already exists - the receive must be done into
> a brand new zfs (it will create the zfs). I suppose you could get around
> this by creating a new zfs and "mv * ../." from there.

Meaning, I'd have to add another hard disk as temporary storage? And here, my
'venerable' problem comes up again: how do I address that additional storage,
let's say /dev/dsk/c2t0d0p3? If I don't give a partition, it might not work or
might overwrite my data; and currently I have no completely empty, new drive,
there is no Solaris partition on my current USB drive, and no free space to
partition it.

Or, I could destroy the existing, new-to-be home slice (I don't need the data)
and write directly to there. How would I do that?

Uwe

P.S.: Just as a reminder:
'old': '/' c0d1s0 (ufs), home c0d1s1 (zfs)
'new': '/' c0d0s0 (ufs), home c0d0s7 (zfs)

Question: how do I make a full copy of old home to new home, without either
being active, and neither in the fstab/mnttab of the other system? And no, I
don't want to *extend* the home to old and new. Just copy, and then fdisk c0d1.
> Both work properly. Half of the job done; now I have the new home mounted,
> but inactive. So I can rm -Rf * or similar there, in order to 'cp -a' the
> content of the old home to the new home. Still the other half is unresolved:
> how do I mount the old home, which is in no fstab (mnttab), on c0d1s1?

I guess I am unclear on what you are trying to do. As far as I can tell, you
have two Solaris root partitions, and two zpools, one of which is associated
with each of the two Solaris root partitions? I assume both of those two disks
are in the system at the same time? Or do you have one single zpool called
"home" with two mirrored disks in it? If you type "zpool status", what do you
get?

You might be getting into trouble because both pools are called "home"? One of
the devs here might be able to help you with this particular instance. (I am
not a dev, just a hapless sysadmin.) You may be able to force one to mount
elsewhere by using "zfs set mountpoint=/somewhere_else home", but again, I
don't know how you'd tell it which "home" zpool you are talking about.

> I can only think of rebooting to the old system, also single user, also
> 'mount -a'; but then, how to store the files? In the archives here I read
> that GNU tar does not handle all the features! Which is why I'd like to
> mount both homes, old and new, at the same time, hoping that 'cp -a' will
> be complete.

Again, "zfs send home@snapshot" just sends to standard output. You could do:
"zfs send home@snapshot > /some/other/fs/home1.backup". This backup can then
be restored with the "zfs receive" command.

>> zfs snapshot u01@backup
>> zfs send u01@backup | zfs receive u02
>>
>> does not work because "u02" already exists - the receive must be done into
>> a brand new zfs (it will create the zfs). I suppose you could get around
>> this by creating a new zfs and "mv * ../." from there.
>
> Meaning, I'd have to add another hard disk as temporary storage?

I guess the confusion here is between zpools and zfs filesystems. A zpool is a
collection of devices, or possibly only one device. By default, a zfs
filesystem is created on top of the zpool with the same name as the zpool.
ZFS filesystems are hierarchical; they are made to be created one filesystem
within another.

Here are two zpools, each with one disk; note that each of them has the
default ZFS on top of it:

root@dev303:/u04# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G     82K    354G     0%  ONLINE     -
u02                     354G    117K    354G     0%  ONLINE     -
root@dev303:/u04# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
u01     79K   348G  26.5K  /u01
u02    114K   348G  24.5K  /u02

I can create a new ZFS with a single command, which allocates space from the
zpool "u01" (because the filesystem name is u01/home):

root@dev303:/u04# zfs create u01/home
root@dev303:/u04# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
u01        110K   348G  26.5K  /u01
u01/home  24.5K   348G  24.5K  /u01/home
u02        114K   348G  24.5K  /u02

> And here, my 'venerable' problem comes up again: how do I address that
> additional storage, let's say /dev/dsk/c2t0d0p3? If I don't give a
> partition, it might not work or might overwrite my data; and currently I
> have no completely empty, new drive, there is no Solaris partition on my
> current USB drive, and no free space to partition it.

You first add your storage to a zpool, which then gets a zfs created on top of
it automatically, and by default is mounted at /zpool_name:

root@dev303:/# zpool create -f new_zpool c0t2d0s6
root@dev303:/# mount | grep new_zpool
/new_zpool on new_zpool read/write/setuid/devices/exec/atime/dev=2d9001c on Sun Feb 11 23:29:12 2007
root@dev303:/# zfs list | grep new_zpool
new_zpool    77K   348G  24.5K  /new_zpool

> Or, I could destroy the existing, new-to-be home slice (I don't need the
> data) and write directly to there. How would I do that?

zpool create newhome c0d0s7
zfs snapshot home@backup
zfs send home@backup | zfs receive newhome/home

A 1:1 copy of the zfs "home" should then exist in "/newhome/home".

> Question: how do I make a full copy of old home to new home, without either
> being active, and neither in the fstab/mnttab of the other system? And no,
> I don't want to *extend* the home to old and new. Just copy, and then
> fdisk c0d1.

Well, both will have to be 'active' (i.e. mounted on the same machine) if you
want the zfs send/zfs receive to work. If you've created the pool on another
machine and physically moved the drive over, you might have to use the
'zpool import' command, but I'm not sure exactly what the circumstances are
where you have to use that.

-A
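[One step the three-liner above leaves out is where the copy ends up mounted. A minimal sketch of repointing the copied file system at the old location afterwards, using the pool and dataset names from this thread; this is just the generic mountpoint property, not a step the poster spelled out:]

# after the receive, the copy lives at /newhome/home by default; once the
# old pool no longer occupies /export/home, repoint the copy there
# (ZFS remounts the dataset at the new location by itself)
zfs set mountpoint=/export/home newhome/home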
Uwe Dippel wrote:
> On 2/11/07, Richard Elling <Richard.Elling@sun.com> wrote:
>> D'Oh! Someone needs to update
>> www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
>> answers below...
>
>> About a year ago we changed 'backup' to 'send' and 'restore' to 'receive'.
>> The zfs_demo.pdf needs to be updated.
>
> Oh, yes, then, please!

Cindy has found the source document and is bringing it up to date.
Thanks Cindy!

>> What is using c0d0s7? Was it previously exported? If you really don't
>> want the data on c0d0s7 any more, try using the '-f' flag.
>
>>> A third way also doesn't work:
>>> % zpool export home
>>> cannot unmount '/export/home': Device busy
>>
>> This is often the case if there is an active process with files open or
>> a current working directory in /export/home.
>
> Also, this might find its way into the demo / document ...

IIRC, it was added to the sun-managers FAQ sometime around 1990. It is not
ZFS-specific. Eventually a '-f' flag was added to umount(1M); that option
also exists for "zfs unmount".

>> What exactly are you trying to accomplish?
>
> Quite straightforward: I have an install on c0d1 and want to transfer
> that install to c0d0. As sysadmin, I need to do that frequently. On
> c0d1, s1 is 'home'; on c0d0 it will be s7. Different size, so 'dd' is
> out. Usually (BSD and Linux), 'dump' works extremely well for me, to
> create a dump file from a *mounted* file system, which needs 'restore'
> (or '|') for the other partition.

tar, cpio, rsync, rdist, cp, pax, zfs send/receive, ... take your pick.

>> Could it be that you are looking for the zfs clone subcommand?
>
> I'll have to look into it!
>
>> These are good questions, we should look to update the FAQ to show
>> some examples of common procedures.
>
> Yes, please! - ZFS seems to be so rich in features and so versatile.
> If you guys are not careful, though, you are moving too fast for
> newcomers. And then, what is 'sooo obvious' for you as developers
> might simply scare off others, who have not the slightest clue how to
> even *start*! - One of my largest hurdles was and is the lack of a
> 'mount /dev/dsk/cndmpx /mnt/helper'. *You* don't need it, but I still
> have no clue how to read a file on an unmounted slice on the other
> drive!! I am now on c0d0, everything quite okay, but I need a file from
> my 'old' home on c0d1s1. See, for you this is obvious; for me, after
> hours of reading, not. So I need to boot to the other drive, copy the
> file to '/' (ufs), reboot to c0d0, mount ufs on c0d1 and read that
> file!! You will laugh about this, but your examples are simply all
> 'too high' and there are too many commands for me to know how to mount
> an inactive slice *without creating a mirror, clone, slave, backup ...*,
> just to *read* a simple file and umount safely again! :)

I'm not sure why you would need to "boot to the other drive" when you
could just mount it?

> Thanks for listening; and don't forget us beginners! In the end, you
> will need people to migrate to ZFS, and then it would be good to have
> a 'cheat sheet': a side-by-side comparison of 'classical' file system
> tasks and commands with those used for ZFS.

I think this is a good idea if we could keep it at the procedural level
and not get into the "this option flag == that option flag".
Perhaps we should start another thread on this.
 -- richard
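[To make the "just mount it" remark concrete for a ZFS slice from another install, a rough sketch using an alternate root so the imported pool does not collide with the running system's mount points; the /a path is only illustrative:]

# import the other install's pool under /a instead of its recorded mount
# point; -f may be needed if that pool was last used by the other
# installation and never exported (if a pool named "home" already exists
# locally, a new name can be given, e.g. "zpool import -R /a home oldhome")
zpool import -f -R /a home

# the file system is now visible under the alternate root
ls /a/export/home

# when done, give it back
zpool export home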
Comment below...

Uwe Dippel wrote:
> Dear Richard,
>
>>> Could it be that you are looking for the zfs clone subcommand?
>>
>> I'll have to look into it!
>
> I *did* look into it.
> man zfs, /clone. This is what I read:
>
>   Clones
>     A clone is a writable volume or file system whose initial contents are
>     the same as another dataset. As with snapshots, creating a clone is
>     nearly instantaneous, and initially consumes no additional space.
>
>     Clones can only be created from a snapshot. When a snapshot is cloned,
>     it creates an implicit dependency between the parent and child. Even
>     though the clone is created somewhere else in the dataset hierarchy,
>     the original snapshot cannot be destroyed as long as a clone exists.
>     The "origin" property exposes this dependency, and the destroy command
>     lists any such dependencies, if they exist.
>
>     The clone parent-child dependency relationship can be reversed by using
>     the "promote" subcommand. This causes the "origin" file system to become
>     a clone of the specified file system, which makes it possible to destroy
>     the file system that the clone was created from.
>   ...
>   zfs clone snapshot filesystem|volume
>
>     Creates a clone of the given snapshot. See the "Clones" section for
>     details. The target dataset can be located anywhere in the ZFS
>     hierarchy, and is created as the same type as the original.
>   ...
>   Example 9 Creating a ZFS Clone
>
>     The following command creates a writable file system whose initial
>     contents are the same as "pool/home/bob@yesterday".
>
>     # zfs clone pool/home/bob@yesterday pool/clone
>
> Richard, I can read and usually understand Shakespeare, though my mother
> tongue is not English. And I've been in computers for 25 years, but this
> is definitively above my head.

Yeah, I know what you mean. And I don't think that you wanted to clone when a
simple copy would suffice. In order to understand clones, you need to
understand snapshots. In my mind a clone is a writable snapshot, similar to a
fork in source code management. This is not what you currently need.

> The latter comes closest to being understood, but does not address my
> persistent problem of having slices on other disks, not a new pool
> within my file system.

zpools are composed of devices. ZFS file systems are created in zpools.
Historically, a file system was created on one device and there was only one
file system per device. If you don't understand this simple change, then the
rest gets very confusing.

> To me it currently looks like a 'dead' invention, like so many other great
> ideas in the history of mankind.
> Seriously, I saw the flash presentation and knew ZFS is *the* filesystem
> for at least as long as I live!
> On the other hand, it needs a 'handle'; it needs to solve legacy problems.
> To me, the worst decision taken until here is that we cannot associate an
> arbitrary disk partition or slice - though formatted as ZFS - readily with
> a mount point in our systems; do something that we control; and relinquish
> the association.

See previous point.

> In order to be accepted broadly, IMHO a new filesystem - as much as it
> shines - can only succeed if it offers a transition from what we system
> admins have been doing all along, and adds all those fantastic items.
> Look, I was kind of feeling bad and stupid for my initial post, because
> I'd myself answer RTFM if someone asked this in a list for BSD or Linux.
> And the desire is so straightforward:
> - replicating an existing, 'live' file system on another drive, any
>   other drive

tar, cpio, rsync, rdist, cp, pax, zfs send/receive, ... take your pick.

> - associate (mount) any slice from an arbitrary other drive to a branch
>   in my file system

Perhaps you are getting confused over the default mount point for ZFS file
systems? You can set a specific mount point for each ZFS file system as a
"mountpoint" property. There is an example of this in the zfs(1M) man page:

  EXAMPLES
    Example 1 Creating a ZFS File System Hierarchy

      The following commands create a file system named "pool/home" and a
      file system named "pool/home/bob". The mount point "/export/home" is
      set for the parent file system, and automatically inherited by the
      child file system.

      # zfs create pool/home
      # zfs set mountpoint=/export/home pool/home
      # zfs create pool/home/bob

What you end up with in this example is:

  ZFS file system "pool/home" mounted as "/export/home"
    (rather than the default "/pool/home")
  ZFS file system "pool/home/bob" mounted as "/export/home/bob"

IMHO, this isn't clear from the example :-(
 -- richard
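[A small addition that may make the inheritance in the example above visible; this only uses the generic zfs get subcommand with the dataset names from the man page example, and the output is merely what one would expect:]

# show where each file system in the hierarchy will mount, and why
# (SOURCE is 'local' where the property was set, 'inherited' below it)
zfs get -r mountpoint pool/home

# output along these lines:
# NAME           PROPERTY    VALUE             SOURCE
# pool/home      mountpoint  /export/home      local
# pool/home/bob  mountpoint  /export/home/bob  inherited from pool/home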
> zpool create newhome c0d0s7
> zfs snapshot home@backup
> zfs send home@backup | zfs receive newhome/home
>
> A 1:1 copy of the zfs "home" should then exist in "/newhome/home".

'Should' was the right word. It doesn't, and it has actually destroyed my poor
chances to mount it. I hope someone can help me!? This is what I did:

% zpool create -f newhome c0d0s7
[There were some data from earlier experiments]
% zfs snapshot home@backup
% zfs list
home          2.38G   135G  2.37G  /export/home
home@backup       0      -  2.37G  -
newhome         85K  27.6G  24.5K  /newhome
% zfs send home@backup | zfs receive newhome/home
[quite some activity, then:]
home          2.38G   135G  2.37G  /export/home
home@backup       0      -  2.37G  -
newhome       2.62G  24.9G  25.5K  /newhome
newhome/home  2.62G  24.9G  2.62G  /newhome/home

So far so good. But after a reboot to c0d0, no home directory is found any
more: "Your home is listed as /export/home/udippel but it does not appear to
exist. ...."

From a terminal, I can get the partition though:
c0d0s7 is   7  home  28.11 GB

But whatever mount command I give, it returns 'invalid dataset name',
including 'mount -a'. Though home is listed in mnttab, and I didn't touch
*anything*. Just boot and reboot plus the commands as above.

[I just call this all a great crap. It ought to be 'backup mountpoint drive';
but also your suggestion looked kind of logical.]

And in case anyone asks, I can confirm that c0d0s7 did mount properly before.

Please, anyone, help me to mount the /home again, without reinstalling c0d0
from scratch [the only thing I could think of with my very limited
understanding]!!

Uwe
Uuh, I just found out that I now have the new data ... whatever, here it is
[I did have to boot to the old system, since the new install lost its new
'home']:

zpool status
  pool: home
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        home        ONLINE       0     0     0
          c0d1s1    ONLINE       0     0     0

errors: No known data errors

  pool: newhome
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newhome     ONLINE       0     0     0
          c0d0s7    ONLINE       0     0     0

errors: No known data errors

udippel@nex:~$ df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0d1s0        7.9G   6.8G   1.0G    88%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   1.2G   560K   1.2G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   6.8G   1.0G    88%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.2G     8K   1.2G     1%    /tmp
swap                   1.2G   152K   1.2G     1%    /var/run
home                   138G   2.4G   135G     2%    /export/home
newhome                 28G    25K    25G     1%    /newhome
newhome/home            28G   2.6G    25G    10%    /newhome/home

Very much unexpected, as far as I can see! Instead of getting the data into
the location of the new install, it has removed the drive c0d0s7 as 'home'
from that new install and added it to my old install.

Now I can take a guess at what happened with your commands, Andrew! I issued
them from the old install, and instead of just transferring the data to the
'home' drive of the new install, it simply associated it with the OS that was
running, the old one. Also this is very unexpected for us, the dino system
admins, since we don't expect to see a difference between copying files from
A to B running A, or copying files from A to B running B. In any case, the
files (and mountpoints) are expected to be the same and unchanged.

Now, so my humble guess, I need to know the commands to be run in the new
install to de-associate c0d0s7 from the old install and re-associate this
drive with the new install. All this probably happened through the '-f' in
'zpool create -f newhome c0d0s7', which seemingly takes precedence over the
earlier mount point association. Makes some sense. But still, then we would
need just another option that permits overwriting the data without changing
the association.

What do I do now? Logically, booting to the other, new, system won't help,
since doing the same from there would do just the reverse and associate the
old home with the new install and the new home.

Uwe
> Now, so my humble guess, I need to know the commands to be run in the new
> install to de-associate c0d0s7 from the old install and re-associate this
> drive with the new install.
> All this probably happened through the '-f' in 'zpool create -f newhome
> c0d0s7', which seemingly takes precedence over the earlier mount point
> association. Makes some sense. But still, then we would need just another
> option that permits overwriting the data without changing the association.
>
> What do I do now? Logically, booting to the other, new, system won't help,
> since doing the same from there would do just the reverse and associate
> the old home with the new install and the new home.

Yep, that's exactly what happened. Zpools have a concept of ownership; they
know about the last system that had them mounted. This is so that in a shared
storage environment, such as a SAN, iSCSI, etc., more than one host does not
control a volume at the same time - that would be disastrous.

The right way to manage the associations is with the 'zpool import' (and the
matching 'zpool export') command. From your "new" system, if you type
"zpool import", it should give you a list of zpools you can import. I suspect
that you will see two volumes there, "home" and "newhome". "zpool import"
just shows you the list of zpools you can import without actually importing
them. Here's what it looks like on my system:

root@dev303:~# zpool import
  pool: new_zpool
    id: 3042040702885268372
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        new_zpool   ONLINE
          c0t2d0s6  ONLINE

It shows that there is one pool available for import on one of my disks. Here
is a list of what zpools I have associated now:

root@dev303:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G    254K    354G     0%  ONLINE     -
u02                     354G    150K    354G     0%  ONLINE     -

Now I run the import command. Note, I can even rename the pool when I import
it, so, for example, you could import your "newhome" volume as "home". Here I
will import "new_zpool" as "zpool":

root@dev303:~# zpool import new_zpool zpool
root@dev303:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
u01                     354G    254K    354G     0%  ONLINE     -
u02                     354G    150K    354G     0%  ONLINE     -
zpool                   354G    500K    354G     0%  ONLINE     -

I just had to learn about zpool import yesterday, since I'm scripting an
automated install of Nexenta for some of our servers - I create the default
storage pool during the install, but then when it reboots, the
hostname/hostid has changed, so I need to re-associate the pool.

I know you're frustrated with this stuff, but once you've figured it out it
really is very powerful. :-)

-Andy
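[Putting the explanation above together for the thread's concrete situation, a rough sketch only - the pool and dataset names are the ones used earlier in the thread, and the last step is an assumption about where the login expects its home:]

# on the old install: release the copy cleanly
zpool export newhome

# on the new install: list what is available, then take the pool over,
# renaming it to "home" on the way in (-f would only be needed if the
# pool had not been exported cleanly first)
zpool import
zpool import newhome home

# point the copied file system at the place logins expect
zfs set mountpoint=/export/home home/home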
Uwe Dippel wrote:
>> root@dev303:/u01/home# zfs snapshot u01/home@backup
>> root@dev303:/u01/home# zfs send u01/home@backup | zfs receive u02/home
>>
>> One caveat here is that I could not find a way to back up the base of
>> the zpool "u01" into the base of zpool "u02", i.e.
>>
>> zfs snapshot u01@backup
>> zfs send u01@backup | zfs receive u02
>>
>> does not work because "u02" already exists - the receive must be done
>> into a brand new zfs (it will create the zfs).

FYI, this is bug 6358519.

>> PS I think the "zfs backup" functionality was replaced with "zfs send" -
>> zfs send just writes to stdout, so you can pipe it to ssh to send it to
>> another machine, redirect it to a file, etc.

'zfs send' is simply the new name for 'zfs backup'. It more clearly expresses
the intent -- to send your fs to another pool. This can be used for backups
too, but it is not a complete backup solution.

> I wonder if I should start a new thread for this, but to me, as a 'cool
> eye' third-party reviewer, ZFS has lost focus very much: what had been
> intended as a high-level file system 'language' or API has recently - so
> it seems to me - regressed into a bunch of low-level, atomistic commands.

Are there any other examples? backup->send is not relevant here (see
above / below).

> The removal of 'backup' is a good example: "backup filesystem1
> filesystem2" is a high-level approach. Now we/you are back to
> send/receive. Thirty years ago, dump had exactly the same: dump/restore.
> Only, the word 'dump' has a negative bias. So a word with a negative bias
> was replaced with a misleading word: 'send'. What progress!?

As mentioned above, this was a simple name change, intended to make it *more*
clear what the intended use and functionality is. Calling it 'backup' is
misleading because it is not a complete backup solution (e.g., doesn't handle
tape drives, restoring individual files, managing multiple backups, etc.).

> Look at all the proposals here in response to my questions on how to get
> an identical copy of a filesystem onto another partition!

Did you read the zfs(1M) manpage, including the following example?

  Example 12 Remotely Replicating ZFS Data

    The following commands send a full stream and then an incremental
    stream to a remote machine, restoring them into "poolB/received/fs@a"
    and "poolB/received/fs@b", respectively. "poolB" must contain the file
    system "poolB/received", and must not initially contain
    "poolB/received/fs".

    # zfs send pool/fs@a | \
        ssh host zfs receive poolB/received/fs@a
    # zfs send -i a pool/fs@b | ssh host \
        zfs receive poolB/received/fs

> I can only urgently suggest to review the work done, and if the desire
> actually prevails to offer a high-level command set, to revert to
> high-level commands. backup could be a great asset, as in
>
> backup [-f] pool|filesystem pool|filesystem
>
> *That* would help the admin: backing up a live pool into another pool, or
> into another filesystem.

I'm not sure that this is a typical "backup" scenario. That said, this will
be very easy to do once 6421958 "want recursive zfs send ('zfs send -r')" is
integrated.

> Instead, it is something like: "First, you have to make a snapshot. Then
> you send this snapshot to a ZFS filesystem that exists. Then, you can
> receive the file resulting from this action onto a non-existing drive."
> Sorry, that is *worse* than dump/restore! No, I don't have to newfs the
> new drive any longer, but if it exists, I have to destroy it before I can
> receive the snapshot. That's not much progress!

In order to support a more powerful and flexible model, sometimes old
concepts (e.g. volume management) must be replaced with new concepts (e.g.
pooled storage). As I mentioned, we are working on making this easier to use.

Your use case makes a number of assumptions that we didn't want to make for
the general case (e.g. that the stream will be sent to and stored on a raw
device on the same machine as the zpool). However, our infrastructure allows
us to provide the type of simple, limited-use functionality you are
requesting. That said, we have limited resources and we need to evaluate what
will be most useful to the most customers. We must learn to walk before we
can run.

> And I don't even dare to attack all those fabulous underpinnings and the
> huge development effort and progress of the work done. I do dare to
> question, though, the party who signed off on the interface, and its
> deviation from high-level comprehensive commands to piecemeal atomic
> can-(and-must)-do-everything-and-anything.

You have mentioned one example, which I feel you have simply misunderstood.
If there are any others, we'd be happy to hear them.

--matt
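[Since the bug referenced above (6358519) is exactly the caveat Andy ran into, here is one possible workaround within the constraint that a receive must create a new dataset: receive the pool-root copy into a child and adjust the mount points. The dataset names are only illustrative, not from the thread:]

# the top-level dataset u02 already exists, so receive the copy of the
# u01 pool root into a child dataset instead
zfs snapshot u01@backup
zfs send u01@backup | zfs receive u02/u01copy

# optionally stop the (still empty) pool root from occupying /u02 and
# let the copy mount there instead
zfs set mountpoint=none u02
zfs set mountpoint=/u02 u02/u01copy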
> root@dev303:~# zpool import
>   pool: new_zpool
>     id: 3042040702885268372
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
>         new_zpool   ONLINE
>           c0t2d0s6  ONLINE
>
> It shows that there is one pool available for import on one of my disks.
> Here is a list of what zpools I have associated now:
>
> root@dev303:~# zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> u01                     354G    254K    354G     0%  ONLINE     -
> u02                     354G    150K    354G     0%  ONLINE     -
>
> Now I run the import command. Note, I can even rename the pool when I
> import it, so, for example, you could import your "newhome" volume as
> "home". Here I will import "new_zpool" as "zpool":
>
> root@dev303:~# zpool import new_zpool zpool
> root@dev303:~# zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> u01                     354G    254K    354G     0%  ONLINE     -
> u02                     354G    150K    354G     0%  ONLINE     -
> zpool                   354G    500K    354G     0%  ONLINE     -

Not there yet, though. It works pretty much like you say, thanks. I had to

% zpool export home

before I could successfully 'import', since there was a faulty home
- - - - FAULTED - probably from my last start without the home.

In the end, after

% zpool import newhome home

it mounts, but not properly, and I get

home
home/home

as file systems. And after an 'exit', it still complains about the lack of
/export/home/udippel.

% zpool import and % zpool list are as expected, though.

How can I now teach it to come up properly as /export/home/udippel, please?

Uwe
> I create the default storage pool during the install, but then when it
> reboots, the hostname/hostid has changed, so I need to re-associate the
> pool. I know you're frustrated with this stuff, but once you've figured
> it out it really is very powerful. :-)

If you read my contributions, I have no doubt about that. On the contrary. I
do doubt ready acceptance, due to a lack of consistency and backward
compatibility. So, here is what I have been doing; some parts:

% zfs mount -a

does not mount home. But home exists:

% zpool list
home    28G  2.62G  25.4G   9%  ONLINE  -

This display is always the same, irrespective of mounted or unmounted. IMHO:
not good. It does not show the mount point either. IMHO: not good.

df -h does not show it as mounted. So it is not. ls -l /export/home confirms
this fact.

% zfs mount home /export/home

is very logical to me, not to zfs: too many arguments. IMHO: not good.

% zfs mount /export/home
invalid dataset

The problem is obvious: I have the data, but no clue how to glue it to my
file system tree. Legacy logic and syntax won't work. man brings another
idea: mountpoint

% zfs set mountpoint=/export/ home
/export/: directory not empty

Right-o.

% zfs set mountpoint=/export/home home

*does* mount; but my 'exit' keeps me at single user: mount -a fails. Try
again:

% mount -a
failed ... mountpoint or dataset busy

I wonder if I am damn stupid from one day to another, or simply buried too
deep in legacy file systems; but I never - as long as I can remember - had
that much trouble attaching an existing file system to a tree! There must be
someone in Sun who *hates* drives! There are never drives you can address, up
to a high level of silliness: as you know, I happen to have two home
directories, on c0d0s7 and on c0d1s1. Now whenever I typed

% zpool import
".... can be imported using name or id"

With a duplicate name, 'home' will fail. But also, c0d0s7 fails miserably.
Instead you have to type 19 digits of an arbitrary random number. The most
normal thing would be to type the location, written just next to the name!
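[For what it is worth, the pattern the thread is groping toward here is that in ZFS the mount point is a property of the dataset rather than an argument to mount. A short sketch with the dataset names from this thread, where home/home is the received copy - whether that is the right dataset to repoint is an assumption about this particular setup:]

# record where the dataset should live; ZFS stores this persistently
zfs set mountpoint=/export/home home/home

# mount by dataset name only - the location comes from the property
zfs mount home/home

# or simply mount everything ZFS knows about that is not yet mounted
zfs mount -a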
Since nobody seemed to have a clue and I didn't want to give up - nor install
from scratch - I kept playing. Suddenly everything was back in place, after I
hit, by intuition,

% zfs set mountpoint=legacy home

It beats me why and how this brought back the desired state, since I had
issued

% zfs set mountpoint=/export/home home
% zfs set mountpoint=/export/home home/home

before, seemingly without success at 'mount -a'; and mnttab contains

home/home /export/home zfs

Thanks to everyone who tried to help out and answer my original question;
especially to Andrew, who contributed almost everything to replicate and
import the pool!

Uwe
Uwe Dippel wrote:
> Since nobody seemed to have a clue and I didn't want to give up - nor
> install from scratch - I kept playing. Suddenly everything was back in
> place, after I hit, by intuition,
> % zfs set mountpoint=legacy home

It wasn't clear to me that you wanted a legacy mount; most people find them
to be more work than worthwhile. I suppose your reference to doing a "mount"
should have been a tip: we don't need to do that with ZFS, by default. I
don't think I know anyone who uses legacy mounts with ZFS...

Cindy, perhaps we should spend some words on this for people who are more
comfortable with vfstab.
 -- richard
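[For readers who do want the vfstab-driven behaviour mentioned above, a minimal sketch of a legacy mount; the dataset name follows this thread, and the vfstab line is the standard form for ZFS legacy mounts:]

# hand mount control back to the traditional tools
zfs set mountpoint=legacy home/home

# then mount it the old way, either by hand...
mount -F zfs home/home /export/home

# ...or at boot via an /etc/vfstab entry such as:
# device to mount  device to fsck  mount point   FS type  fsck pass  mount at boot  options
# home/home        -               /export/home  zfs      -          yes            -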
Cindy.Swearingen@Sun.COM wrote on 2007-Feb-20 15:50 UTC,
Re: [zfs-discuss] How to backup a slice ? - newbie:
Uwe,

It was also unclear to me that legacy mounts were causing your troubles. The
ZFS Admin Guide describes ZFS mounts and legacy mounts here:

http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view

Richard, I think we need some more basic troubleshooting info, such as this
mount failure. I'll add some additional troubleshooting scenarios to the ZFS
Admin Guide.

Cindy

Richard Elling wrote:
> Uwe Dippel wrote:
>
>> Since nobody seemed to have a clue and I didn't want to give up - nor
>> install from scratch - I kept playing. Suddenly everything was back in
>> place, after I hit, by intuition,
>> % zfs set mountpoint=legacy home
>
> It wasn't clear to me that you wanted a legacy mount; most people find
> them to be more work than worthwhile. I suppose your reference to doing a
> "mount" should have been a tip: we don't need to do that with ZFS, by
> default. I don't think I know anyone who uses legacy mounts with ZFS...
>
> Cindy, perhaps we should spend some words on this for people who are more
> comfortable with vfstab.
> -- richard