Hi Folks,

Situation: an x86-based Solaris 11 Express server with 2 pools (rpool / data) got fried. I need to recover the raidz pool "data", which consists of 5 x 1TB SATA drives. I have individually checked the disks with the Seagate diagnostic tool; they are all physically OK.

Issue: a new Sandy Bridge based x86 machine was purchased and I attempted to rebuild with Solaris 11 Express, but the onboard SATA controllers are not recognised by the OS... no disks found. Assumption: sol11exp does not yet have drivers for the SATA controllers on the new motherboard.

Solution needed that allows me to build a functional NAS on the new hardware, which will let me reconstitute and read the raidz zpool "data".

Any thoughts?

My latest thoughts are:

1) Try FreeBSD as an alternative OS, hoping it has more recently updated drivers that support the SATA controllers. According to the ZFS wiki, FreeBSD 8.2 supports zpool version 28. My concern is that when I updated the old (fried) server to sol11exp it upgraded the zpool version to 31, so FreeBSD 8.2 may still not be able to read the zpool on the raidz1.

2) Install Windows to get full hardware support (the drivers that came with the motherboard are Windows-only) and run sol11exp in a VirtualBox environment which has full access to the raidz disks. Not sure if this is possible, but maybe worth a try.

Any help / suggestions to recover my data would be appreciated.

Regards,
Rep
--
This message posted from opensolaris.org
Well, actually you've scored a hit on both ideas I had after reading the question ;)

One more idea though: is it possible to change the disk controller mode in the BIOS, i.e. to a generic IDE mode? Hopefully that might work, even if sub-optimal...

AFAIK FreeBSD 8.x is limited to "stable" ZFSv15, and "experimental" ZFSv28 is being worked on in FreeBSD-9.0-CURRENT. I don't have any firsthand experience, so maybe things have changed.

Also, if the hardware works in another OS (like FreeBSD or Linux) you may be able to run VirtualBox or a free version of VMware with dedicated disks inside of that - if that's preferable for some reason (i.e. licensing). You are however locked into using sol11x with pool v31.

Hey experts: does OpenIndiana dev-151 support ZFSv31 too, in a manner compatible with sol11x?
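For reference, the "dedicated disks" approach with VirtualBox is usually done through raw-disk vmdk wrappers. A minimal sketch on a Windows host follows; the drive numbers and file names are only placeholders, not anything from the setup described above:

VBoxManage internalcommands createrawvmdk -filename C:\vms\data1.vmdk -rawdisk \\.\PhysicalDrive1
VBoxManage internalcommands createrawvmdk -filename C:\vms\data2.vmdk -rawdisk \\.\PhysicalDrive2

One wrapper is created per physical member disk, and each vmdk is then attached to the guest's storage controller, so the guest OS works against the real disk blocks.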
On Mon, 11 Jul 2011, Brett wrote:

> 1) to try freebsd as an alternative o/s hoping it has more recently
> updated drivers to support the sata controllers. According to the
> zfs wiki, freebsd 8.2 supports zpool version 28. I have a concern
> that when i updated the old (fried) server to sol11exp it upgraded
> the zpool version to 31 and so freebsd8.2 still may not be able to
> read the zpool on the raidz1

Normally Solaris does not upgrade the version of existing pools. Upgrading the pool is an explicit administrative action. If no one has explicitly upgraded the pool versions, then they should match whatever version they were created with.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
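For what it's worth, the pool version can be checked without importing the pool, straight from a vdev label. A minimal sketch, assuming a Solaris-family host that can at least see one member disk (the device path is only an example):

# zdb -l /dev/rdsk/c7t2d0s0 | grep version    # the vdev label records the pool's on-disk version
# zpool upgrade -v                            # lists the versions this ZFS implementation supports
# zpool get version data                      # works only once the pool is imported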
You could buy an LSI2008-based JBOD SATA card. It typically has 8 SATA ports. The LSI2008 works directly on S11E, out of the box. That card gives very good performance, typically close to 1 GB/sec transfer speed.

And when you switch mobo, just bring the LSI2008 card to the new mobo and you are set. No more worries about compatibility. Read reviews on the LSI2008; it is a high-end card.

If you go the LSI2008 route, avoid the RAID functionality as it messes up ZFS. Flash the BIOS to JBOD mode.
--
This message posted from opensolaris.org
On Wed, Jul 13, 2011 at 6:32 AM, Orvar Korvar <knatte_fnatte_tjatte at yahoo.com> wrote:

> If you go the LSI2008 route, avoid the RAID functionality as it messes up ZFS. Flash the BIOS to JBOD mode.

You don't even have to do that with the LSI SAS2 cards. They no longer ship alternate IT-mode firmware for these like they did for the 1068e and others. As long as you don't configure any RAID volumes, the card will attach to the non-RAID mpt_sas driver in Solaris and you'll be all set.

Eric
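If it helps, here is a generic way (a sketch, not specific to any particular card) to confirm which driver a controller bound to on a Solaris-family system:

# prtconf -D | grep -i mpt    # shows whether the HBA attached to mpt (1068e-era) or mpt_sas (SAS2)
# format                      # disks behind the HBA should appear as plain whole disks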
Ok, I went with the Windows and VirtualBox solution. I could see all 5 of my raid-z disks in Windows. I encapsulated them as entire disks in vmdk files and subsequently offlined them in Windows.

I then installed a sol11exp vbox instance, attached the 5 virtualized disks and can see them in my sol11exp (they are disks #1->#5).

root@san:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <ATA    -VBOX HARDDISK  -1.0  cyl 26105 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c7t2d0 <ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c7t3d0 <ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,2829@d/disk@3,0
       3. c7t4d0 <ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,2829@d/disk@4,0
       4. c7t5d0 <ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,2829@d/disk@5,0
       5. c7t6d0 <ATA    -VBOX HARDDISK  -1.0  cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,2829@d/disk@6,0
Specify disk (enter its number):

Great I thought, all I need to do is import my raid-z...

root@san:~# zpool import
root@san:~#

Damn, that would have been just too easy I guess. Help!!!

How do I recover my data? I know it's still hiding on those disks. Where do I go from here?

Thanks,
Rep
--
This message posted from opensolaris.org
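One sanity check worth doing at this stage (a suggestion, not something from the session above) is to confirm that each virtual disk reports the full ~1 TB capacity inside the guest, which would indicate the whole physical disk, and not just a partition, got wrapped:

root@san:~# iostat -En                    # the "Size:" line for each c7tNd0 device should be roughly 1 TB
root@san:~# prtvtoc /dev/rdsk/c7t2d0s2    # prints the disk's label/partition map, if one is readable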
On Tue, Jul 19, 2011 at 4:29 PM, Brett <repudi8or at gmail.com> wrote:

> Great I thought, all I need to do is import my raid-z...
>
> root@san:~# zpool import
> root@san:~#
>
> Damn, that would have been just too easy I guess. Help!!!
>
> How do I recover my data? I know it's still hiding on those disks. Where do I go from here?

What does "zdb -l /dev/dsk/c7t6d0s0" or "zdb -l /dev/dsk/c7t6d0p1" show?

--
Fajar
root@san:~# zdb -l /dev/dsk/c7t6d0s0
cannot open '/dev/rdsk/c7t6d0s0': I/O error

root@san:~# zdb -l /dev/dsk/c7t6d0p1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
root@san:~#
--
This message posted from opensolaris.org
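Since zdb reads labels per device node, it may also help to look at how the virtual disk is partitioned; a hedged sketch of the usual checks, using the device names from the listing above:

root@san:~# prtvtoc /dev/rdsk/c7t6d0s2       # Solaris VTOC/EFI slice layout, if present
root@san:~# fdisk -W - /dev/rdsk/c7t6d0p0    # dumps the x86 fdisk partition table to stdout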
Roy Sigurd Karlsbakk
2011-Jul-19 18:44 UTC
[zfs-discuss] recover raidz from fried server ??
> Hey experts: does OpenIndiana dev-151 support ZFSv31 too, in a manner
> compatible with sol11x?

OpenIndiana b151/Illumos would support zpool v31 if Oracle would release that code. Since that doesn't seem to happen, or hasn't happened yet, the chances are rather low for b151 to get it. The current build is still at zpool v28, but with a bunch of fixes compared to the old one from osol.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
Roy Sigurd Karlsbakk
2011-Jul-19 18:46 UTC
[zfs-discuss] recover raidz from fried server ??
Could you try to just boot up fbsd or linux on the box to see if zfs (native or fuse-based, respectively) can see the drives?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
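To illustrate that suggestion, a rough sketch of what the test could look like from a live environment; the package names and device paths vary by distribution and are assumptions here, and note the earlier caveat that FreeBSD and Linux implementations at this time only read up to zpool v28:

# FreeBSD live environment (ZFS built in):
zpool import                      # scans all disks for importable pools and prints what it finds
zpool import -f -R /mnt data      # force-import under an alternate root, if the pool version is supported

# Linux live CD with zfs-fuse or the native zfs module loaded:
zpool import -d /dev/disk/by-id   # scan using stable device ids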
On Wed, Jul 20, 2011 at 1:46 AM, Roy Sigurd Karlsbakk <roy at karlsbakk.net> wrote:

> Could you try to just boot up fbsd or linux on the box to see if zfs (native or fuse-based, respectively) can see the drives?

Yup, that might seem to be the best idea. Assuming that all those drives are the original drives with raidz, and it originally had pool version 28 or lower, zfsonlinux should be able to see it. You can test it using an Ubuntu Live CD and downloading/compiling the additional zfs module. If you're interested, see https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem , stop just before Step 2, and try to do "zpool import".

Then again, vbox on top of Windows SHOULD detect the disks as well. Are you sure you exported the WHOLE disk, and not a partition? If you still have that setup, try again, but this time test with different slices and partitions (i.e. test "zdb -l" for all /dev/dsk/c7t6d0*, or whatever your original raidz disk is now recognized as).

--
Fajar
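A compact way to run that check across every node of one disk is sketched below; c7t6d0 is taken from the listing earlier in the thread, and the same loop would be repeated for c7t2d0 through c7t5d0:

root@san:~# for d in /dev/dsk/c7t6d0*; do echo "== $d"; zdb -l $d 2>/dev/null | grep -c 'version'; done

Any node that prints a non-zero count is one where at least one ZFS label unpacked.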