I have a quick ZFS question. With most hardware RAID controllers, all the data and the metadata are stored on the disks themselves, so the data can survive a controller failure, or even the deletion of a LUN, as long as the LUN is recreated with the same drives in the same positions. Does this kind of functionality exist within ZFS?

For example, let's say I have a JBOD full of disks connected to a server running OSOL, and all the drives are formatted as one big raidz volume. Now let's say I experience a hardware failure and have to bring in a new server with a fresh installation of OSOL. Would I be able to put the raidz volume from the JBOD back together so I can see the original data?

Thanks for any input.
JD Trout wrote:
> [...] Would I be able to put the raidz volume from the JBOD back
> together so I can see the original data?

The zpool metadata is also stored on the disks. As long as the disks themselves are fine, you can reconnect them to another server and import them there; ZFS will find the zpool (in this case, your raidz volume).

-Manoj
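On the replacement host, a minimal sketch of that import looks like this (the pool name "tank" here is hypothetical; yours will differ):

  # Scan the attached disks for importable pools and list what is found
  zpool import

  # Import the pool by the name (or numeric pool ID) shown in the listing
  zpool import tank

  # Confirm the raidz vdev came back together with all its disks
  zpool status tank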
That is great to hear. What is the command to do this? I set up a test environment and would like to give it a try.
On Mon, Mar 29, 2010 at 3:49 PM, JD Trout <jdtrout at ucla.edu> wrote:
> That is great to hear. What is the command to do this? I set up a test
> environment and would like to give it a try.

If you can plan the removal, simply 'zpool export' your pool, then 'zpool import' it on the new controller / host. If you don't do an export, use 'zpool import -f' to force the import.

-B

--
Brandon High : bhigh at freaks.com
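A sketch of both paths, again with a hypothetical pool name "tank":

  # Planned move: cleanly export on the old host first...
  zpool export tank
  # ...then attach the disks to the new host and import normally
  zpool import tank

  # Unplanned move (old host died, no export happened): force the import
  zpool import -f tank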
Perfect. Thanks!
If you "zfs export" it will offline your pool. This is what you do when you''re going to intentionally remove disks from the live system. If you suffered a hardware problem, and you''re migrating your uncleanly-unmounted disks to another system, then as Brandon described below, you''ll need the "-f" to force the import. When you "zfs import" it does not matter if you''ve moved the disks around. What used to be connected to SATA port 0 can move to port 6 or whatever. Irrelevant. The data on disks says not only which pool each disk belongs to, but which position within the pool. This makes sense and is particularly important, because, suppose you have a pool in operation for some years, with hotspare. A disk fails, the hotspare is consumed, another disk fails, another hotspare consumed, and so on. Now you''ve got all your disks jumbled around in random order. And then your CPU dies so you need to move your disks to another system, and there''s no way for you to know which order the disks were in the pool. It''s important to be able to import the volume, with the disks all jumbled around in random order. From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Brandon High Sent: Monday, March 29, 2010 6:54 PM To: JD Trout Cc: zfs-discuss at opensolaris.org Subject: Re: [zfs-discuss] zfs recreate questions On Mon, Mar 29, 2010 at 3:49 PM, JD Trout <jdtrout at ucla.edu> wrote: That is great to hear. What is the command to do this? I setup a test situation and I would like to give it a try. If you can plan the removal, simply ''zpool export'' your pool, then ''zpool import'' it on the new controller / host. If you don''t do an export, use ''zpool import -f'' to force it. -B -- Brandon High : bhigh at freaks.com -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100330/b270dab8/attachment.html>
Thanks for the details, Edward; that is good to know.

Another quick question. In my test setup I created the pool using snv_134, because I wanted to see how things would run under the next release, which (from my understanding) is supposed to be based on snv_134. However, I recently read that the 2010.03 release date is unknown and that things are kind of uncertain (is this true? Is there a link that provides info about the release schedule?). Anyway, my question is this: I created the pool with snv_134, and since I need to get this hardware into production, I can't wait for the new release and have to move forward with 2009.06. As expected, though, I can't import the pool, because it was created with a newer version of ZFS. What options are there to import it? Like I said, I don't need the data, so I can blow away the pool and start over; I was just curious to see how ZFS handles this situation.

Thanks guys, I really appreciate all the info.
> Anyway, my question is, [...] as expected I can't import the pool,
> because it was created with a newer version of ZFS. What options are
> there to import it?

I'm quite sure there is no option to import, receive, or otherwise downgrade a ZFS pool from a later version. I'm pretty sure your only option is something like "tar".
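To illustrate that fallback: a sketch of a file-level copy from the newer box to the older one (pool name "tank", host "prodhost", and mountpoints are all hypothetical):

  # After recreating the pool on the 2009.06 host, stream the files
  # across from the snv_134 host; tar carries the data, not the pool format
  cd /tank && tar cf - . | ssh prodhost 'cd /tank && tar xf -'

Also worth checking on your build: zpool(1M) in some later builds documents creating a pool at an older on-disk version (e.g. 'zpool create -o version=14 ...'), which would let a snv_134 host make a pool that 2009.06 can import; verify that against your man page before relying on it.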