My power supply failed. After I replaced it, I had issues staying up after doing zpool import -f. I reinstalled OpenSolaris 134 on my rpool and still had issues.

I have 5 pools:

rpool - 1*37GB
data  - RAIDZ, 4*500GB
data1 - RAID1, 2*750GB
data2 - RAID1, 2*750GB
data3 - RAID1, 2*2TB - WD20EARS

The system locks up every time I try to import data3. I even tried exporting everything except rpool to reduce the RAM usage. I have 3 GB of RAM, with a maximum of 4 GB possible.

I'd like to read the data off data3. From what I'm reading, the WD EARS are probably not the right drives to be using.
Hi Tom,

Did you boot from the OpenSolaris LiveCD and attempt to manually
mount the data3 pool? The import might take some time.

I'm also curious whether the device info is coherent after the
power failure. You might review the device info for the root
pool to confirm.

If the device info is okay, you might consider adding more memory
to get data3 imported. This has helped others in the past.

Thanks,

Cindy
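For anyone following along, the steps Cindy describes map onto the standard zpool commands roughly like this. This is only a sketch; the pool names come from this thread, and the pfexec prefix assumes an OpenSolaris LiveCD session:

  # From the installed system, confirm the root pool's device info looks sane
  zpool status -v rpool

  # From the LiveCD shell, list pools visible for import and the state of their devices
  pfexec zpool import

  # Force-import the suspect pool; this can run for a long time
  pfexec zpool import -f data3

  # After it comes in, check its health and any errors
  pfexec zpool status -v data3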
> Hi Tom,
>
> Did you boot from the OpenSolaris LiveCD and attempt to manually
> mount the data3 pool? The import might take some time.

I haven't tried that. I am booting from a new install to the hard drive though.

> I'm also curious whether the device info is coherent after the
> power failure. You might review the device info for the root
> pool to confirm.

Wouldn't that be ok with a fresh install?

> If the device info is okay, you might consider adding more memory
> to get data3 imported. This has helped others in the past.

I've thought of that. I think the motherboard can only go to 4GB though.

That's why I exported the other zpools - to free up RAM.

The "rule" is 1GB of RAM per TB, right? I have about 4.5 TB with 3 GB of RAM, so I'm a bit over that rule.
Tom,

If you freshly installed the root pool, then those devices
should be okay, so that wasn't a good test. The other pools
should remain unaffected by the install and, I hope, by the
power failure.

We've seen device info get messed up during a power failure,
which is why I asked.

If you don't have dedup enabled on data3, then the memory
should be okay, but increasing memory has helped others in
the past. It's just a suggestion.

Thanks,

Cindy
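For reference, whether dedup was ever turned on for a pool can be checked with the standard property commands once the pool is imported. A sketch, using the pool name from this thread:

  # Show the dedup property for every dataset in the pool
  zfs get -r dedup data3

  # Pool-level dedup ratio; 1.00x means no deduplicated blocks
  zpool get dedupratio data3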
> Tom,
>
> If you freshly installed the root pool, then those devices
> should be okay, so that wasn't a good test. The other pools
> should remain unaffected by the install and, I hope, by the
> power failure.

Yes. I was able to import them and have since exported them.

> We've seen device info get messed up during a power failure,
> which is why I asked.

Yep. Gotta cover the basics.

> If you don't have dedup enabled on data3, then the memory
> should be okay, but increasing memory has helped others in
> the past. It's just a suggestion.

I'm not sure I didn't have dedup enabled. I might have.

As it happens, the system rebooted and is now in single user mode. I'm trying another import. Most services are not running, which should free RAM.

If it crashes again, I'll try the live CD while I see about more RAM.
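A side note for anyone watching memory while a long import runs: on Solaris the memory picture can be followed with mdb and vmstat. A sketch, run as root; nothing here is specific to this pool:

  # One-shot summary of how physical memory is in use (kernel, anon, page cache, free)
  echo ::memstat | mdb -k

  # Watch free memory and scan rate every 5 seconds while the import runs
  vmstat 5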
Tom Buskey
2010-Jun-29 15:12 UTC
[zfs-discuss] zpool import issue after a crash - Followup
> I'm not sure I didn't have dedup enabled. I might have.
>
> As it happens, the system rebooted and is now in single user mode.
> I'm trying another import. Most services are not running, which
> should free RAM.
>
> If it crashes again, I'll try the live CD while I see about more RAM.

Success.

I got another machine with 8GB of RAM. I installed the drives and booted from the b134 Live CD. Then I did a zpool import -f. 2-3 days later it finished, and I was able to transfer my data off the drives. Yay!

I did not have dedup on.

At one point, top showed that less than 1GB of RAM was free. At another point I could no longer SSH into the system, so it had probably used up most of the RAM. The console was also unresponsive at that point. At least it didn't crash and was able to finish.

One other data point - these are WD 20EARS drives without anything done for the 4k sectors, which made them slower.

The long recovery and the RAM needed make me wary about putting too much zpool per GB of RAM on a home system. And wary of the WD 20EARS drives. These drives are for my TiVo server storage on a Linux box. I don't care about losing a few bits, so they're going to be local to the Linux box, set up for the 4k sectors.

Has Sun done any testing on zpool size vs. RAM? I'd guess they aren't that interested in bitty boxes with only 4GB of RAM.
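For readers in the same spot, the transfer off the drives can be done with a recursive snapshot and a send/receive stream once the pool finally imports. This is only a sketch; the destination pool name ("backup") is made up for illustration:

  # Snapshot every dataset in the rescued pool
  zfs snapshot -r data3@rescue

  # Replicate the whole dataset hierarchy, properties included, to another pool
  zfs send -R data3@rescue | zfs recv -d backup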