I have a snv_52 server that I want to upgrade to the latest, either via a
non-debug build or a simple fresh install. I don't know which yet as I have
not decided. I have a pile of disks hanging off it on two controllers, c0
and c1. The disks on c1 are in a zpool thus:

bash-3.1$ zpool status
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs0        ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0
          c1t10d0   ONLINE       0     0     0
          c1t11d0   ONLINE       0     0     0
          c1t12d0   ONLINE       0     0     0
          c1t13d0   ONLINE       0     0     0
          c1t14d0   ONLINE       0     0     0

errors: No known data errors

bash-3.1$ zpool iostat -v zfs0 15 4
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.7G      0     19  14.7K  1.65M
  c1t9d0    30.3G  3.46G      0      3  2.44K   281K
  c1t10d0   30.3G  3.46G      0      3  2.47K   281K
  c1t11d0   30.3G  3.46G      0      3  2.43K   281K
  c1t12d0   30.3G  3.46G      0      3  2.45K   280K
  c1t13d0   30.3G  3.46G      0      3  2.49K   281K
  c1t14d0   30.3G  3.46G      0      3  2.43K   281K
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.6G      0     97  63.9K  10.4M
  c1t9d0    30.3G  3.44G      0     13      0  1.71M
  c1t10d0   30.3G  3.44G      0     13  8.52K  1.71M
  c1t11d0   30.3G  3.44G      0     14  8.52K  1.73M
  c1t12d0   30.3G  3.44G      0     19  12.8K  1.74M
  c1t13d0   30.3G  3.44G      0     19  25.6K  1.74M
  c1t14d0   30.3G  3.44G      0     16  8.52K  1.74M
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.4G      1    120  85.5K  11.7M
  c1t9d0    30.3G  3.41G      0     17  13.0K  1.95M
  c1t10d0   30.3G  3.41G      0     18  21.3K  1.97M
  c1t11d0   30.3G  3.41G      0     23  21.3K  1.96M
  c1t12d0   30.3G  3.41G      0     21  12.8K  1.95M
  c1t13d0   30.3G  3.41G      0     21  12.8K  1.97M
  c1t14d0   30.3G  3.40G      0     18  4.26K  1.94M
----------  -----  -----  -----  -----  -----  -----

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         182G  20.3G      0    110  38.4K  10.4M
  c1t9d0    30.4G  3.38G      0     18  12.8K  1.74M
  c1t10d0   30.4G  3.38G      0     17  4.26K  1.75M
  c1t11d0   30.4G  3.38G      0     18      0  1.74M
  c1t12d0   30.4G  3.38G      0     17  8.53K  1.71M
  c1t13d0   30.4G  3.38G      0     18      0  1.70M
  c1t14d0   30.4G  3.38G      0     20  12.8K  1.77M
----------  -----  -----  -----  -----  -----  -----

bash-3.1$ uname -a
SunOS mars 5.11 snv_52 sun4u sparc SUNW,Ultra-2

Note the complete lack of redundancy!

Now then, I have a collection of six disks on controller c0 that I would
like to now mirror with this zpool zfs0. That's the wrong way of thinking,
really. In the SVM world I would create stripes and then mirror them to get
either RAID 0+1 or RAID 1+0 depending on various factors. With ZFS I am
more likely to just create the mirrors on day one, thus:

  # zpool create zfs0 mirror c1t9d0 c0t9d0 mirror c1t10d0 c0t10d0 ... etc

but I don't have that option now. The zpool exists as a simple stripe set
at the moment. Or some similar analogy of a stripe set in the ZFS world.

Now zpool(1M) says the following for either "add" or "attach":

System Administration Commands                          zpool(1M)
SunOS 5.10                Last change: 31 Jul 2006

     zpool add [-fn] pool vdev ...

         Adds the specified virtual devices to the given pool.
         The vdev specification is described in the "Virtual
         Devices" section. The behavior of the -f option, and the
         device checks performed are described in the "zpool
         create" subcommand.

         -f    Forces use of vdevs, even if they appear in use or
               specify a conflicting replication level. Not all
               devices can be overridden in this manner.

         -n    Displays the configuration that would be used
               without actually adding the vdevs. The actual pool
               creation can still fail due to insufficient
               privileges or device sharing.

     zpool attach [-f] pool device new_device

         Attaches new_device to an existing zpool device. The
         existing device cannot be part of a raidz configuration.
         If device is not currently part of a mirrored
         configuration, device automatically transforms into a
         two-way mirror of device and new_device. If device is
         part of a two-way mirror, attaching new_device creates a
         three-way mirror, and so on. In either case, new_device
         begins to resilver immediately.

         -f    Forces use of new_device, even if it appears to be
               in use. Not all devices can be overridden in this
               manner.

Note that "attach" has no option for -n which would just show me the damage
I am about to do :-(

So I am making a best guess here that what I need is something like this:

  # zpool attach zfs0 c1t9d0 c0t9d0

which would mean that the first disk in my zpool would be mirrored and
nothing else. A weird config, to be sure, but .. is this what will happen?

I ask all this in painful boring detail because I have no way to back up
this zpool other than tar to a DLT. The last thing I want to do is destroy
my data when I am trying to add redundancy.

Any thoughts?

--
Dennis Clarke
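[Editorial note: the single-disk attach guessed at above generalizes to all
six spindles. A minimal sketch that only prints the commands for review
rather than running them; the c1tNd0/c0tNd0 pairings are assumed from the
layout described, not confirmed against the actual hardware:]

```shell
#!/bin/sh
# Print the six attach commands for review before running any of them.
# Assumed pairing: each striped c1tNd0 gets c0tNd0 as its mirror half.
for t in 9 10 11 12 13 14; do
    echo "zpool attach zfs0 c1t${t}d0 c0t${t}d0"
done
```

Piping the output through `sh` (or pasting each line) would perform the
actual attaches one vdev at a time, letting each resilver be watched with
`zpool status` before moving on.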
On 07 January, 2007 - Dennis Clarke sent me these 6,1K bytes:

> I have a pile of disks hanging off it on two controllers, c0 and c1.
[..]
> Note the complete lack of redundancy!
>
> Now then, I have a collection of six disks on controller c0 that I would
> like to now mirror with this zpool zfs0. That's the wrong way of thinking
> really. In the SVM world I would create stripes and then mirror them to
> get either RAID 0+1 or RAID 1+0 depending on various factors. With ZFS I
> am more likely to just create the mirrors on day one thus:
>
> # zpool create zfs0 mirror c1t9d0 c0t9d0 mirror c1t10d0 c0t10d0 ... etc
>
> but I don't have that option now. The zpool exists as a simple stripe set
> at the moment. Or some similar analogy of a stripe set in the ZFS world.
[..]
> So I am making a best guess here that what I need is something like this:
>
> # zpool attach zfs0 c1t9d0 c0t9d0
>
> which would mean that the first disk in my zpool would be mirrored and
> nothing else. A weird config to be sure but .. is this what will happen?

Why not just try it yourself? :)

unterweser:/tmp# mkfile 64m file1a
unterweser:/tmp# mkfile 64m file2a
unterweser:/tmp# mkfile 64m file1b
unterweser:/tmp# mkfile 64m file2b
unterweser:/tmp# zpool create crap /tmp/file1a /tmp/file2a
unterweser:/tmp# zpool attach crap /tmp/file1a /tmp/file1b
unterweser:/tmp# zpool status crap
  pool: crap
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Jan  8 01:31:07 2007
config:

        NAME             STATE     READ WRITE CKSUM
        crap             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /tmp/file1a  ONLINE       0     0     0
            /tmp/file1b  ONLINE       0     0     0
          /tmp/file2a    ONLINE       0     0     0

errors: No known data errors
unterweser:/tmp# zpool attach crap /tmp/file2a /tmp/file2b
unterweser:/tmp# zpool status crap
  pool: crap
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Jan  8 01:31:18 2007
config:

        NAME             STATE     READ WRITE CKSUM
        crap             ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /tmp/file1a  ONLINE       0     0     0
            /tmp/file1b  ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            /tmp/file2a  ONLINE       0     0     0
            /tmp/file2b  ONLINE       0     0     0

errors: No known data errors

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
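[Editorial note: the file-backed experiment above can be wrapped in a script
for repeat rehearsals. A sketch, assuming mkfile(1M), zpool(1M), and a
writable /tmp; DRYRUN=echo makes it only print each command, so nothing
runs until you clear that variable on a scratch box. It also adds the
cleanup steps the transcript omits:]

```shell
#!/bin/sh
# File-backed rehearsal of the stripe-to-mirror conversion.
# DRYRUN=echo prints each command instead of running it;
# set DRYRUN= (empty) on a scratch machine to actually execute.
DRYRUN=echo
$DRYRUN mkfile 64m /tmp/file1a /tmp/file1b /tmp/file2a /tmp/file2b
$DRYRUN zpool create crap /tmp/file1a /tmp/file2a   # two-disk stripe
$DRYRUN zpool attach crap /tmp/file1a /tmp/file1b   # mirror first vdev
$DRYRUN zpool attach crap /tmp/file2a /tmp/file2b   # mirror second vdev
$DRYRUN zpool status crap
# Tear the sandbox down when finished:
$DRYRUN zpool destroy crap
$DRYRUN rm /tmp/file1a /tmp/file1b /tmp/file2a /tmp/file2b
```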
On Sun, Jan 07, 2007 at 06:28:04PM -0500, Dennis Clarke wrote:

> Now then, I have a collection of six disks on controller c0 that I would
> like to now mirror with this zpool zfs0. That's the wrong way of thinking
> really. In the SVM world I would create stripes and then mirror them to
> get either RAID 0+1 or RAID 1+0 depending on various factors. With ZFS I
> am more likely to just create the mirrors on day one thus:
>
> # zpool create zfs0 mirror c1t9d0 c0t9d0 mirror c1t10d0 c0t10d0 ... etc
>
> but I don't have that option now. The zpool exists as a simple stripe set
> at the moment. Or some similar analogy of a stripe set in the ZFS world.
>
> ...
>
> Note that "attach" has no option for -n which would just show me the
> damage I am about to do :-(

In general, ZFS does a lot of checking before committing a change to the
configuration. We make sure that you don't do things like use disks that
are already in use, partitions that overlap, etc. All of the data
integrity features in ZFS wouldn't be worth much if we allowed an
administrator to unintentionally destroy data.

> So I am making a best guess here that what I need is something like this:
>
> # zpool attach zfs0 c1t9d0 c0t9d0
>
> which would mean that the first disk in my zpool would be mirrored and
> nothing else. A weird config to be sure but .. is this what will happen?

Yep, that's exactly what will happen. Lather, rinse, repeat for the other
disks in the pool, and you should be exactly where you want to be.

> I ask all this in painful boring detail because I have no way to back up
> this zpool other than tar to a DLT. The last thing I want to do is
> destroy my data when I am trying to add redundancy.
>
> Any thoughts?

What you figured out is exactly the right thing. If you decide you want
to undo it, just use "zpool detach".

--Bill
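[Editorial note: the "zpool detach" escape hatch mentioned above, spelled
out for the six-disk case. As with the attach sketch, this only prints the
commands for review, and the c0 device names are assumed rather than
confirmed:]

```shell
#!/bin/sh
# Back out the mirroring: detach the c0 half that was attached to
# each c1 disk. Printed for review rather than executed directly.
for t in 9 10 11 12 13 14; do
    echo "zpool detach zfs0 c0t${t}d0"
done
```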
>> Note that "attach" has no option for -n which would just show me the
>> damage I am about to do :-(
>
> In general, ZFS does a lot of checking before committing a change to the
> configuration. We make sure that you don't do things like use disks
> that are already in use, partitions that overlap, etc. All of the
> data integrity features in ZFS wouldn't be worth much if we allowed an
> administrator to unintentionally destroy data.

which is why I am beginning to think of ZFS as the last filesystem I will
need. But the head-space transition is not easy for a guy that thrives on
super-stable technology. Like Solaris 8 :-)

>> So I am making a best guess here that what I need is something like this:
>>
>> # zpool attach zfs0 c1t9d0 c0t9d0
>>
>> which would mean that the first disk in my zpool would be mirrored and
>> nothing else. A weird config to be sure but .. is this what will happen?
>
> Yep, that's exactly what will happen. Lather, rinse, repeat for the
> other disks in the pool, and you should be exactly where you want to be.

Okay .. phasers on stun and in I go.

>> I ask all this in painful boring detail because I have no way to back up
>> this zpool other than tar to a DLT. The last thing I want to do is
>> destroy my data when I am trying to add redundancy.
>>
>> Any thoughts?
>
> What you figured out is exactly the right thing. If you decide you want
> to undo it, just use "zpool detach".

The only reason that I asked is that there is no explicit EXAMPLE in the
manpage that says HOW TO UPGRADE FROM STRIPE TO MIRRORED STRIPES or maybe
something that says RAID 0+1 or RAID 1+0. Just a bit more info in the ZFS
manpages, because that is the first place any admin will look. Not an
online PDF file somewhere. Oftentimes all I have to see what is going on
in my server is a DEC VT220 terminal.

Dennis